Networks in China and Iran also used AI models to create and post disinformation, but the campaigns did not reach large audiences
OpenAI on Thursday released its first-ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.
Malicious actors used the company’s generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.