‘Rise in anti-Semitic and Islamophobic rhetoric on X’ makes the need for further eSafety rules clear, communications minister will say
Social media platforms and tech companies will be required to stamp out harmful material created using artificial intelligence, such as deepfake intimate images and hate speech, under new online safety rules proposed by the federal government.
The communications minister, Michelle Rowland, has signalled that a suite of changes is needed to respond to “new and emerging harms”, with the Labor government expressing significant concern at widely available generative AI services being used to create offensive or dangerous images, videos and text.