OpenAI pledges support for AI watermarking rules


AI firms have backed a California bill that would require AI-generated content to be watermarked, while a second bill focused on safety has proved less popular, though it has found support from Elon Musk.

They are two of several dozen bills introduced in California, many of which have already been dropped, that seek to regulate AI. The first is AB 3211, which would force tech companies to label content generated by AI. The second, SB 1047, would require AI developers to test their models for safety.

The latter bill has been the more controversial of the two, unsurprisingly, as it would require companies making AI to take responsibility for the safety of their systems.

The former is easier to implement and could prove helpful in stemming the flow of misinformation, a concern at the forefront in the US ahead of the presidential election.

AI safety testing

SB 1047 would require developers spending more than $100 million to create an AI model to run safety tests, submit themselves to external audits and implement guardrails including kill switches, or risk regulatory action. That bill has found less support among the tech industry — but does now boast high-profile backers. 

In a post on X, Musk said California should “probably pass” the SB 1047 bill. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”

Musk is a bit of an outsider in the AI industry. One of OpenAI’s co-founders, he is now suing the company after it shifted away from its initial structure as a non-profit, and he has set up his own AI company, xAI.

The bill has also found support from Geoffrey Hinton, the “godfather of AI” who last year quit Google amid his own concerns about the fast pace of development.

As with the watermarking bill, SB 1047 has been hit with a series of amendments following industry pressure. Earlier this month, Scott Wiener, the senator who introduced SB 1047, announced it had passed a key committee stage — but with a large number of changes influenced by industry players, notably Anthropic.

Wiener admitted at the time: “While the amendments do not reflect 100% of the changes requested by Anthropic—a world leader on both innovation and safety—we accepted a number of very reasonable amendments proposed.”

The changes included dropping criminal penalties for perjury in favour of civil ones, and reducing the ability of regulators to seek penalties unless harm has occurred or there’s an imminent threat to public safety. Critics said the amendments amounted to “watering down” the regulations.

SB 1047 will now be voted on by the full state assembly by the end of this month; if it passes, Governor Gavin Newsom must either sign it into law or veto it by the end of September.

Watermarking AI

AB 3211 has already passed its first two rounds of votes and, like SB 1047, must now be approved by the state Senate by the end of this month and then signed into law by Newsom by the end of September.

“New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content,” OpenAI Chief Strategy Officer Jason Kwon wrote in a letter sent to the bill’s author, according to Reuters.

The watermarking bill had previously faced opposition from the tech industry, notably from industry body the Software Alliance (BSA, formerly the Business Software Alliance), which counts Microsoft, OpenAI and Adobe among its members.

The BSA wrote to Californian regulators earlier this year saying AB 3211 was “unworkable”, in part because of its tight notification timelines for watermark failures, though the letter stressed the industry wasn’t wholly against the idea of watermarks as one aspect of transparency.

The bill has since been amended to drop those notification requirements. Reports suggest that, beyond OpenAI, Microsoft and Adobe both now support the bill.



