If we are to take seriously the risk facing humanity, regulators need the power to ‘recall’ deployed models, as well as assess leading, not lagging, indicators of risk, writes Prof John McDermid
Re Geoffrey Hinton’s concerns about the perils of artificial intelligence (‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years, 27 December), I believe these concerns can best be mitigated through collaborative research on AI safety, with a role for regulators at the table.
Currently, frontier AI is tested post-development using “red teams” who try their best to elicit a negative outcome. This approach will never be enough; AI needs to be designed for safety and evaluation – something that can be done by drawing on expertise and experience in well-established safety-related industries.