With help from Ben Schreckinger.
The little guys of AI are joining the Washington influence game.
Tech giants and defense contractors have long dominated AI lobbying, seeking both money and favorable rules. And while the biggest companies still dominate the debate, pending legislation in Congress aimed at out-innovating China, along with proposed data privacy bills, has spurred a spike in lobbying by smaller AI players.
A number of companies focused on robotics, drones and self-driving cars are setting up their own Washington influence machines, positioning them to shape the future of AI policy as they see fit.
Much of it is spurred by one major piece of legislation: the Bipartisan Innovation Act, commonly called USICA, an acronym from the bill's previous title, which aims to outcompete China on innovation.
A tech lobbyist, who was granted anonymity to speak candidly, called the AI lobbying around USICA a “gold rush.” If passed as now written, the bill would generate approximately $50 billion in additional research spending over the next five years. Senate Majority Leader Chuck Schumer has touted USICA, formerly known as the United States Innovation and Competition Act, as the best response to China’s technological dominance.
Robotics company iRobot registered The Vogel Group, a major D.C. firm led by former GOP leadership aide Alex Vogel, to lobby on the bill. Argo AI, an autonomous driving technology company, has deployed its in-house lobbyists — including a former aide to Rep. Debbie Dingell (D-Mich.) and a former legislative aide to Sen. Lindsey Graham (R-S.C.) — to lobby on supply chain issues within USICA.
Ryan Hagemann, co-director of the IBM Policy Lab, said “the most attention” in the AI space is currently focused on USICA.
But the expansion of lobbying goes far beyond USICA, and it is about more than chasing government subsidies.
The most recent versions of the American Data Privacy and Protection Act and the Algorithmic Accountability Act propose government-mandated “impact assessments” for companies using algorithms. That would mean companies suddenly having to hand over audits of their technology to regulators, a lengthy process that some companies argue should be required only of firms producing “risky” AI, such as facial recognition technology used by police to catch criminals, as opposed to low-risk AI such as chatbots. IBM, for example, argues that it should not have to conduct the same kind of impact assessments on its general-purpose AI systems as companies training AI on their own data sets.
“It’s not about who should carry out the impact assessments, but when the impact assessment should take place,” Hagemann said.
Merve Hickok, senior research director of the Center for AI and Digital Policy, a nonprofit committed to digital rights, says the stakes are high: If their lobbying is effective, only a handful of companies will have to file algorithmic audits.
“You see a lot of companies — not just big tech companies, but also some industry groups — pushing and lobbying against these requirements,” Hickok said, pointing to similar efforts underway in Europe.
The definition of “AI” is vague in the first place. But many of the companies using AI to operate their technology, such as drone companies, are gearing up for a bumpy ride in Washington. Drone company Skydio, which was seeking more funding for a Federal Aviation Administration training initiative and Defense Department drone acquisitions, nearly doubled its lobbying spending, from $160,000 in 2020 to $304,000 in 2021. Shield AI, which builds AI-operated drones for military operations, went from spending $65,000 on lobbying in 2020 to more than $1.5 million in 2021, a figure it is on track to exceed this year. Skydio declined to comment, and Shield AI did not respond to a request for comment.
Meanwhile, facial recognition companies like Clearview AI are fighting bills that would pause use of the technology, such as the Facial Recognition and Biometric Technology Moratorium Act. Clearview AI, which has drawn heavy criticism from lawmakers for its controversial facial recognition technology, spent $120,000 on lobbying in 2021 after first registering lobbyists in May of that year.
Hickok pointed out that AI lobbying in the U.S. is still dominated by big companies like Google and Amazon, even as smaller companies proliferate on lobbying registries. Because the U.S. has not passed significant AI regulation, she said, the country has “become a test bed, while the companies reap the benefits.”
The financial crisis in crypto markets continues today, with a British Virgin Islands court ordering the liquidation of crypto hedge fund Three Arrows Capital.
POLITICO’s Sam Sutton reports that two executives from the politically connected consultancy Teneo will oversee that process.
For investors curious about how and why the fund got to this point — and concerned about what could further destabilize crypto markets — a new report out today from on-chain analytics firm Nansen traces some of the interconnected moves. “Dominoes are falling,” Nansen researcher Andrew Thurman summed up in an email.
The report highlights the role of staked Ether, a derivative of Ether, the second-largest cryptocurrency, issued by Lido Finance. (Staked Ether is not the currency itself, but rather a token that can be exchanged for Ether after the Ethereum network completes an intricate upgrade process.) When times were good, the market treated staked Ether as if it were as good as Ether. But last month, when Terra’s algorithmic stablecoin melted down, staked Ether began trading at a discount to the real thing.
Three Arrows had invested in Luna as well as Ether; after Terra’s meltdown, it sold its staked Ether at a loss, Thurman said, and was ultimately unable to recover.
Despite fears of further contagion caused by the demise of Three Arrows, the market may now be taking a breather. Thurman said the on-chain positions of cryptocurrency lender Celsius — which recently raised concerns by suspending withdrawals — have improved and that emergency measures from Lido appear to have calmed investors.
– Ben Schreckinger
A new GAO report on government use of facial recognition tech found that a slew of federal agencies use the technology, and that most of them have not assessed its privacy risks. Fourteen agencies, ranging from NASA to the Department of Justice, use facial recognition to unlock agency-issued smartphones. It’s a sign that facial recognition has become so commonplace that it’s taken for granted, leading agencies to use it without fully analyzing its implications.
– Konstantin Kakaes
– Popular pregnancy and ovulation tracking apps reserve the right to turn user data over to law enforcement, a Forbes analysis found.
– Are advanced technologies hard to scale?
– A professor of finance offers a world-historical way to think about blockchains.
– It’s possible that AI can produce ideas.
– What does “human-oriented” AI actually look like?