Sam Altman is among the most vocal supporters of artificial intelligence, but is also leading calls to regulate it. He outlines his vision of a very uncertain future
When I meet Sam Altman, the chief executive of the AI research laboratory OpenAI, he is in the middle of a world tour. He is preaching that the very AI systems he and his competitors are building could pose an existential risk to the future of humanity unless governments work together now to establish guardrails, ensuring responsible development over the coming decade.
In the days that followed, he and hundreds of tech leaders, including the scientists and “godfathers of AI” Geoffrey Hinton and Yoshua Bengio, as well as Demis Hassabis, chief executive of Google DeepMind, put out a statement saying that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. It is an all-out effort to convince world leaders that they are serious when they say that “AI risk” demands concerted international action.