Big tech is flexing its muscle to try to ‘water down’ California’s AI regulation – here’s why that’s a problem


California’s AI bill is set to move forward with several amendments made at the behest of tech companies, prompting concern among some industry experts who feel the changes undermine the legislation’s strength.

Scott Wiener – the senator behind the bill – announced the amendments in a statement issued via his office last week, confirming that the bill had been passed by the Assembly Appropriations Committee.

Wiener acknowledged that a large number of the changes made to the bill reflected and addressed concerns expressed by major players such as Anthropic and “many others in the industry.”

“While the amendments do not reflect 100% of the changes requested by Anthropic—a world leader on both innovation and safety—we accepted a number of very reasonable amendments proposed,” Wiener said. 

The amendments can be broken down into five core components, and while they don’t actively advance the fortunes of AI companies, they do make their lives easier.

For example, one amendment replaces criminal penalties for perjury with civil ones, meaning the act now carries no criminal charges for any violation and the legal repercussions of stepping out of line are significantly reduced.

The amended bill also curtails the Attorney General’s ability to seek civil penalties “unless a harm has occurred or there is an imminent threat to public safety,” and it makes the Attorney General’s office the main regulatory body for AI, rather than the state regulatory body originally proposed.

There are other amendments, but the overall shift caters to big tech and has understandably irked some industry experts. Bruna de Castro e Silva, AI governance specialist at Saidot, said industry meddling in proposed AI regulation should raise serious concerns over the long-term impact of any such legislation.

“These amendments not only advance the corporate interests of big tech companies, but also undermine the fundamental principle of AI governance as a practice that must be carried out continuously throughout the product lifecycle,” Silva told ITPro.

“Companies like Anthropic have played a significant role in watering down these regulations, leveraging their influence to shift the focus away from stringent pre-release testing and oversight,” she added.  

California is creating the wrong kind of legislation 

According to Silva, the newly amended bill – also known as SB 1047 – misses the mark on AI regulation because it no longer allows regulators to act against AI developers before harm has occurred.

“The original intent of the bill was to establish a proactive, risk-based framework, as first introduced in the EU AI Act, to ensure that AI products are safe before being released to the public,” she said.

“However, this revised bill encourages a reactive, ex-post approach that addresses safety only after damage has occurred.”

The issue here is that regulators will only have the power to clean up the mess after it has been made, rather than preventing it in the first place. In the context of AI, it’s hard to know how damaging this could be.

The technology has not been in widespread commercial use and development long enough for experts or the wider industry to know how catastrophic a badly or maliciously designed AI system could be. 

“By limiting liability to cases of ‘real harm’ or ‘imminent danger,’ Silicon Valley risks creating an environment where innovation and corporate interests take precedence over public welfare and the protection of human rights,” Silva said.  

“AI safety can’t be an afterthought; it must be embedded in the development process from the outset,” she added. 



