‘Rise in anti-Semitic and Islamophobic rhetoric on X’ makes the need for further eSafety rules clear, communications minister will say
Social media platforms and tech companies will be required to stamp out harmful material created using artificial intelligence, such as deepfake intimate images and hate speech, under new online safety rules proposed by the federal government.
The communications minister, Michelle Rowland, has signalled that a suite of changes is needed to respond to “new and emerging harms”, with the Labor government expressing significant concern that widely available generative AI services are being used to create offensive or dangerous images, videos and text.