‘Rise in anti-Semitic and Islamophobic rhetoric on X’ makes the need for further eSafety rules clear, communications minister will say
Social media platforms and tech companies will be required to stamp out harmful material created using artificial intelligence, such as deepfake intimate images and hate speech, under new online safety rules to be introduced by the federal government.
The communications minister, Michelle Rowland, has signalled that a suite of changes is needed to respond to “new and emerging harms”, with the Labor government expressing significant concern that widely available generative AI services are being used to create offensive or dangerous images, videos and text.