If we are to take seriously the risk facing humanity, regulators need the power to ‘recall’ deployed models, as well as assess leading, not lagging, indicators of risk, writes Prof John McDermid
Re Geoffrey Hinton’s concerns about the perils of artificial intelligence (‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years, 27 December), I believe these concerns can best be mitigated through collaborative research on AI safety, with a role for regulators at the table.
Currently, frontier AI is tested post-development using “red teams” who try their best to elicit a negative outcome. This approach will never be enough; AI needs to be designed for safety and evaluation – something that can be done by drawing on expertise and experience in well-established safety-related industries.