Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o
A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.
Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, ensuring that powerful artificial intelligence systems adhered to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.