Studies in which ChatGPT outperformed scientists and GPs raise troubling questions for the future of professional work
AI means too many (different) things to too many people. We need better ways of talking – and thinking – about it. Cue Drew Breunig, a gifted geek and cultural anthropologist, who has come up with a neat categorisation of the technology into three use cases: gods, interns and cogs.
“Gods”, in this sense, would be “super-intelligent, artificial entities that do things autonomously”. In other words, the AGI (artificial general intelligence) that OpenAI’s Sam Altman and his crowd are trying to build (at unconscionable expense), while at the same time warning that it could be an existential threat to humanity. AI gods are, Breunig says, the “human replacement use cases”. They require gigantic models and stupendous amounts of “compute”, water and electricity (not to mention the associated CO2 emissions).