A number of senior AI safety researchers at OpenAI, the organisation behind ChatGPT, have left the company. Those resigning often cite shifts in company culture and a lack of investment in AI safety as their reasons for leaving.
To put it another way, though the ship may not be taking on water, the safety team are departing in their own little dinghy, and that is likely cause for some concern.
The most recent departure is Rosie Campbell, who previously led the Policy Frontiers team. In a post on her personal Substack (via TweakTown), Campbell shared the final message she sent to her colleagues in Slack, writing that though she has "always been strongly driven by the mission of ensuring safe and beneficial [Artificial General Intelligence]," she now believes that she "can pursue this more effectively externally."
Campbell highlights "the dissolution of the AGI Readiness team" and the departure of Miles Brundage, another AI safety researcher, as specific factors that informed her decision to leave.
Campbell and Brundage had previously worked together at OpenAI on matters of "AI governance, frontier policy issues, and AGI readiness."
Brundage shared some of his own reasons for parting ways with OpenAI in a post to his Substack back in October. He writes, "I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so." Having previously served as Senior Advisor for AGI Readiness, he adds, "I think I can be more effective externally."
This comes mere months after Jan Leike's resignation as co-lead of OpenAI's Superalignment team. This team was tasked with tackling the problem of ensuring that AI systems potentially more intelligent than humans still act in accordance with human values—and they were expected to solve this problem within the span of four years. Talk about a deadline.