The US government wants technology companies building AI and cloud computing services to prove their systems are safe, and to report on their capabilities so the technology cannot be misused.
The Department of Commerce has proposed a rule that would require companies with powerful AI models and compute clusters to provide detailed reports to the federal government.
According to the Bureau of Industry and Security (BIS), the aim is to assess security measures, but also to keep watch over the “defense-relevant capabilities” of these frontier technologies.
“As AI is progressing rapidly, it holds both tremendous promise and risk,” said Secretary of Commerce Gina Raimondo. “This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.”
Regulation and reporting are common government responses to new technologies, said Alex Kearns, Head of Consulting at Ubertas Consulting.
“The US Government’s proposal appears to focus reporting requirements on the providers of AI capability (i.e. model developers) rather than the consumers of the models,” he told ITPro.
“The regulation currently in place in the EU (EU Artificial Intelligence Act) focuses more on how organizations are using AI and whether it is responsible and fair. Both aspects are important to regulate.”
Dual-use danger
The BIS reporting would include information on development activities and security measures, in particular the outcomes of red-teaming. The term generally refers to simulated attacks on an organization or system, carried out to expose weaknesses and improve its protections; for AI models, it means deliberately probing them to elicit harmful or unintended behavior.
However, BIS flagged that it wants to see the results of testing for dangerous capabilities in the technologies themselves, such as the ability to assist in cyber attacks, and whether they could make it easier for “non experts” to develop chemical, biological, or nuclear weapons.
The long-term aim is to ensure “dual-use” foundation models can’t be misused by foreign adversaries or non-state actors, BIS added.
Under the proposals, a “dual-use foundation model” would be defined as one trained on broad data and containing tens of billions of parameters, that is applicable across multiple contexts and could therefore be modified to complete tasks posing a risk to security, whether to public safety or the economy.
But the government also wants to know how those systems could benefit its defense efforts.
Wise move or government interference?
Crystal Morin, cybersecurity strategist at Sysdig, said companies should already be considering these ideas, meaning reporting that information shouldn’t be a “challenge”.
“For advanced technologies that have huge potential like AI, we should think about misuse or potential security risks right from the start,” she told ITPro.
“This legislation will encourage companies to be upfront and honest about their security practices and promote a secure-first approach to software design lifecycles, building advanced technology with responsible security in mind from the get-go.
“In these situations, I think of Jeff Goldblum’s character in Jurassic Park: ‘Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should’.”
But Kashif Nazir, technical manager at Cloudhouse, warned that smaller companies could feel the weight of regulatory reporting more heavily.
“While these measures tackle important security concerns, they could come at the cost of slowing down innovation, particularly for smaller companies that might struggle with the regulatory burden,” he said.
Who will be subject to reporting rules?
The proposed rule applies specifically to US-based companies that are developing – or planning to develop – large dual-use foundation models, or that have the significant computing hardware necessary to do so.
If approved as written, the rule would require companies to report quarterly on their development and training efforts, including security practices to protect that work, in particular around model weights and red-team testing.
Companies will have 30 days from publication of the proposed rule, expected this week, to respond with comments. BIS ran a pilot project earlier this year.
The move responds to an executive order signed by President Biden last year, which aims to ensure AI is developed safely.
“This proposed reporting requirement would help us understand the capabilities and security of our most advanced AI systems,” said Under Secretary of Commerce for Industry and Security Alan F. Estevez.
“It would build on BIS’s long history of conducting defense industrial base surveys to inform the American government about emerging risks in the most important US industries.”