Exclusive: eSafety Commissioner says companies must work on building tools to promote greater online safety, including detecting deep fake images
Artificial intelligence tools could be used to generate child abuse images and terrorist propaganda, Australia’s eSafety Commissioner has warned while announcing a world-leading industry standard that requires tech giants to stamp out such material on AI-powered search engines.
The new industry code covering search engines, to be detailed on Friday, requires big tech firms such as Google, Microsoft (through its Bing search engine) and DuckDuckGo to eliminate child abuse material from their search results, and to take steps to ensure generative AI products cannot be used to create deep fake versions of that material.