Fears that bias in training sets will mean minorities bear the brunt of scams, fraud and misinformation
Detection tools being developed to combat the growing threat of deepfakes – realistic-looking false content – must use training datasets that are inclusive of darker skin tones to avoid bias, experts have warned.
Most deepfake detectors rely on a learning strategy whose performance depends largely on the dataset used to train them. The AI then looks for signs of manipulation that may not be visible to the human eye.