‘Rise in anti-Semitic and Islamophobic rhetoric on X’ makes the need for further eSafety rules clear, communications minister will say
Social media platforms and tech companies will be required to stamp out harmful material created using artificial intelligence, such as deepfake intimate images and hate speech, under new online safety rules proposed by the federal government.
The communications minister, Michelle Rowland, has signalled that a suite of changes is needed to respond to “new and emerging harms”, with the Labor government expressing significant concern that widely available generative AI services are being used to create offensive or dangerous images, videos and text.