Safety groups say images are so lifelike it can be hard to tell whether real children were harmed in their production
The volume of sexually explicit images of children being generated by predators using artificial intelligence is overwhelming law enforcement’s capacity to identify and rescue real-life victims, child safety experts warn.
Prosecutors and child safety groups working to combat crimes against children say AI-generated images have become so lifelike that in some cases it is difficult to determine whether real children were harmed in their production. A single AI model can generate tens of thousands of new images in a short amount of time, and this content has begun to flood the dark web and seep into the mainstream internet.