Studies in which ChatGPT outperformed scientists and GPs raise troubling questions for the future of professional work
AI means too many (different) things to too many people. We need better ways of talking – and thinking – about it. Cue Drew Breunig, a gifted geek and cultural anthropologist, who has come up with a neat categorisation of the technology into three use cases: gods, interns and cogs.
“Gods”, in this sense, would be “super-intelligent, artificial entities that do things autonomously”. In other words, the AGI (artificial general intelligence) that OpenAI’s Sam Altman and his crowd are trying to build (at unconscionable expense), while at the same time warning that it could be an existential threat to humanity. AI gods are, Breunig says, the “human replacement use cases”. They require gigantic models and stupendous amounts of “compute”, water and electricity (not to mention the associated CO2 emissions).