Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o
A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.
Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, ensuring that powerful artificial intelligence systems adhered to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.