OpenAI and Anthropic have agreed to let the US government access major new AI models before release to help improve their safety.
The companies signed memorandums of understanding with the US AI Safety Institute to provide access to the models both before and after their public release, the agency announced Thursday. The agency says the arrangement will let the parties work together to evaluate safety risks and mitigate potential issues, and that it will provide feedback on safety improvements in collaboration with its counterpart agency in the UK.
Sharing access to AI models is a significant step at a time when federal and state legislatures are considering what kinds of guardrails to place on the technology without stifling innovation. On Wednesday, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would require AI companies in California to take specific safety measures before training advanced foundation models. The bill has drawn pushback from AI companies, including OpenAI and Anthropic, which warn it could harm smaller open-source developers; it has since undergone some changes and is still awaiting a signature from Governor Gavin Newsom.
US AI Safety Institute director Elizabeth Kelly said in a statement that the new agreements were “just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”