California’s controversial AI safety bill has cleared a major hurdle, bringing the industry one step closer to mandatory safety testing for the most advanced AI models.
SB 1047 passed the California legislature following a 41-9 vote, though it still requires approval — or a formal veto — from Governor Gavin Newsom before the end of September. Newsom hasn’t yet indicated a position on the bill.
If signed into law, the bill would require safety testing of AI models that cost more than $100 million to develop or that exceed a high threshold of computing power; that would include submitting to third-party audits, protections for whistleblowers, and the threat of government action for non-compliance. Developers would also be required to include a kill switch in their systems.
“Innovation and safety can go hand in hand — and California is leading the way,” said Senator Scott Wiener, who introduced the bill, in a statement. “With this vote, the Assembly has taken the truly historic step of working proactively to ensure an exciting new technology protects the public interest as it advances.”
While SB 1047 has found some backers in the industry — notably Elon Musk — it has been opposed by a number of industry stakeholders and leading figures in AI development.
Leading academics have also raised concerns, with Stanford’s Dr Fei-Fei Li calling it “well-meaning” but a threat to academic AI research and smaller firms.
“It’s got the most bipartisan, broad opposition I’ve ever seen,” said Martin Casado, general partner at VC firm Andreessen Horowitz, according to Reuters.
The bill has already been amended following pressure from the industry, though Wiener rejected criticism that it had been watered down.
Unsurprising opposition
It’s no surprise AI regulation is a tough sell. “On the surface, any AI legislation involves some degree of friction between safety and innovation,” notes Aaron Simpson, partner at Hunton Andrews Kurth LLP.
But Simpson argues SB 1047 may go too far. “There is certainly a reasonable argument that this Bill places too much responsibility on AI developers to police innovation that could result in a chilling effect on AI advancement,” he tells ITPro.
“Most agree that accountability for AI is a desirable policy goal – but you can’t achieve that through assigning responsibility to developers alone.”
Indeed, he notes other AI laws have not taken that approach. “Those laws impose requirements on both developers and deployers of AI systems, which will in my view be the more effective path to accountability,” Simpson says.
“The key will be holding all actors in the AI ecosystem to account – which includes not only AI developers but also deployers.”
Different laws across countries and even states could prove problematic, notes Malcolm Ross, SVP for product strategy at Appian, which is why he believes regulation should come at the federal level.
“The US government must lead from the top and create comprehensive AI regulations that all states can follow,” Ross says. “A federal framework could preempt conflicting state laws, providing a clear and consistent environment for business.”
Finding a balance
Regulating such an industry is of course complicated, notes Ivana Bartoletti, global chief privacy and AI governance officer at Wipro and co-founder of Women Leading in AI. “This debate underscores the difficulty in finding the right equilibrium between ensuring thorough oversight and fostering innovation,” she says.
The bill has positive aspects but isn’t perfect, she argues. “In my view, although the bill’s intent to safeguard against the potential dangers of foundational models is commendable, it appears to emphasize the more apocalyptic perspectives on AI.”
She adds: “The bill does indeed possess commendable features, such as its focus on health and safety, the mandatory switch-off capability, and the requirement for external audits. However, it seems to favor large corporations, possibly to the detriment of open-source AI. This concern, especially among startups, has sparked a significant dialogue, though defining open-source remains complex. It might be more pertinent to discuss open-access AI instead.”
Bartoletti notes that this is just one of many bills addressing AI in California, sitting alongside others that tackle privacy, data transparency, copyright, and bias.
“This holistic strategy highlights California’s dedication to tackling the diverse challenges presented by AI technology.”
Beyond time to act
That said, surely it’s not too much to ask that those creating these technologies ensure they’re safe, says Jamie Akhtar, co-founder and CEO at CyberSmart.
“Although this bill has proven contentious with some parts of the artificial intelligence industry, it does little more than ask vendors to do what they should already be doing – thoroughly testing technology before it’s released and providing safety features such as kill switches,” he tells ITPro.
Indeed, Caroline Carruthers, CEO of Carruthers and Jackson, says the bill is a necessary step towards regulating an industry that has been in dire need of guardrails since the release of ChatGPT.
“The fact that we are seeing a global effort to establish legislation and infrastructure which allows companies to take full advantage of AI’s capabilities, while ensuring safety, security and governance, is a positive step in the right direction,” she says.