The standards target generative AI’s misuse potential but Microsoft says its ability to flag problematic material could be hurt too
Tech companies say new Australian safety standards will inadvertently make it harder for generative AI systems to detect and prevent online child abuse and pro-terrorism material.
Under two mandatory child safety standards released in draft form by the regulator last year, the eSafety commissioner, Julie Inman Grant, proposed that providers detect and remove child abuse material and pro-terrorism material “where technically feasible”, and disrupt and deter new material of that nature.