Advanced large language models’ ability to reason is a feather in the cap for developers of generative AI technology
In 2017, researchers at the British AI company DeepMind (now Google DeepMind) published an extraordinary paper describing how their new algorithm, AlphaZero, had taught itself to play a number of games to superhuman standards without any instruction. The machine could, they wrote, “achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.”
Speaking afterwards at a big machine-learning conference, DeepMind’s chief executive, Demis Hassabis (himself a world-class chess player), observed that the program often made moves that would seem unthinkable to a human chess player. “It doesn’t play like a human,” he said, “and it doesn’t play like a program. It plays in a third, almost alien, way.” It would be an overstatement to say that AlphaZero’s capabilities spooked those who built it, but it clearly surprised some of them. It was, one (privately) noted later, a bit like putting your baby daughter to sleep one evening and finding her solving equations in the morning.