Fears that bias in training sets could mean minorities bear the brunt of scams, fraud and misinformation
Detection tools being developed to combat the growing threat of deepfakes – realistic-looking false content – must use training datasets that are inclusive of darker skin tones to avoid bias, experts have warned.
Most deepfake detectors are based on a learning strategy that depends largely on the dataset used to train them. They then use AI to detect signs that may not be visible to the human eye.