‘Rise in anti-Semitic and Islamophobic rhetoric on X’ makes the need for further eSafety rules clear, communications minister will say
Social media platforms and tech companies will be required to stamp out harmful material created using artificial intelligence, such as deepfake intimate images and hate speech, under new online safety rules proposed by the federal government.
The communications minister, Michelle Rowland, has signalled a suite of changes is needed to respond to “new and emerging harms”, with the Labor government expressing significant concern that widely available generative AI services are being used to create offensive or dangerous images, videos and text.