Networks in China and Iran also used AI models to create and post disinformation, but the campaigns did not reach large audiences
OpenAI on Thursday released its first ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.
Malicious actors used the company’s generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.