Fears that bias in training sets could mean minorities bear the brunt of scams, fraud and misinformation
Detection tools being developed to combat the growing threat of deepfakes – realistic-looking false content – must use training datasets that are inclusive of darker skin tones to avoid bias, experts have warned.
Most deepfake detectors rely on machine learning, and their accuracy depends largely on the dataset used to train them. The detector then uses AI to spot signs of manipulation that may not be visible to the human eye.