OpenAI has lost another senior AI safety researcher who said the company – along with the world – simply isn’t ready to handle artificial general intelligence (AGI).
Miles Brundage, the company’s senior advisor for AGI readiness, confirmed his departure in a Substack post, adding that he plans to start or join a non-profit focused on AI policy research and advocacy.
“I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so,” he said.
Brundage added that he had not had time to work on several important research topics while at OpenAI, and that it is hard to remain impartial about an organization when you work closely with its people every day.
He said he will be better placed outside OpenAI to raise awareness of the world’s lack of readiness and to push more effectively for policy changes. In the US, Brundage said, Congress should ‘robustly’ fund the US AI Safety Institute to provide greater clarity on AI policy.
“In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready,” he said.
“I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time – though I think the gaps remaining are substantial enough that I’ll be working on AI policy for the rest of my career.”
Brundage joined OpenAI in 2018 and later became its head of policy research before moving on to lead the AGI readiness team. His main areas of focus were the societal implications of language models and AI agents, frontier AI regulation, and compute governance.
Brundage’s departure also brings the disbandment of his AGI readiness team. The Economic Research team, led by Pamela Mishkin and until recently a sub-team of AGI Readiness, will move under Ronnie Chatterji, OpenAI’s new chief economist. The rest of the group will be split up among other teams.
OpenAI’s exodus continues
Brundage’s exit is just the latest in a series of high-profile departures from OpenAI. Last month, chief technology officer Mira Murati, chief research officer Bob McGrew, and vice president of research Barret Zoph all announced they were leaving.
In May, co-founder Ilya Sutskever and head of safety Jan Leike departed, while co-founder John Schulman said in August that he was leaving to join rival Anthropic.
Sutskever has since co-founded his own venture, Safe Superintelligence (SSI), aimed specifically at developing safe and responsible AI systems. In September, the company raised $1 billion in a funding round that saw investment from Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel.
OpenAI’s push toward AGI and concerns over AI safety have been a recurring source of tension over the last year, and were a key factor in the infamous boardroom coup that saw Sam Altman temporarily ousted in November 2023.
This summer, whistleblowers at the company reportedly filed a complaint with the US Securities and Exchange Commission (SEC) alleging that the firm had illegally banned its employees from warning regulators about the risks its technology could pose to humanity.