Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o
A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.
Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, a role focused on ensuring that powerful artificial intelligence systems adhered to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.