Networks in China and Iran also used AI models to create and post disinformation, but the campaigns did not reach large audiences
OpenAI on Thursday released its first-ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.
Malicious actors used the company’s generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.