Fears that bias in training sets would mean minorities bearing the brunt of scams, fraud and misinformation
Detection tools being developed to combat the growing threat of deepfakes – realistic-looking false content – must use training datasets that are inclusive of darker skin tones to avoid bias, experts have warned.
Most deepfake detectors rely on a learning strategy whose performance depends largely on the dataset used to train them. The detector then uses AI to spot signs of manipulation that may not be visible to the human eye.
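The mechanism behind the warning can be illustrated with a toy sketch (not a real detector): the group names, the single "artifact score" feature, and the assumption that fake artifacts register more weakly for the under-represented group are all hypothetical, chosen only to show how an imbalanced training set skews a learned decision threshold.

```python
import random

random.seed(0)

def artifact_score(group, is_fake):
    # Hypothetical 1-D feature. We ASSUME deepfake artifacts register
    # more weakly for group "B" (a stand-in for under-represented
    # skin tones in the training data).
    mean = (1.0 if group == "A" else 0.4) if is_fake else 0.0
    return mean + random.gauss(0.0, 0.2)

# Imbalanced training set: 90% of examples come from group A.
train = [("A", f) for _ in range(90) for f in (False, True)] + \
        [("B", f) for _ in range(10) for f in (False, True)]

real = [artifact_score(g, f) for g, f in train if not f]
fake = [artifact_score(g, f) for g, f in train if f]
# Midpoint-of-means threshold, dominated by the majority group.
threshold = (sum(real) / len(real) + sum(fake) / len(fake)) / 2

def detection_rate(group, trials=1000):
    # Fraction of fakes from `group` that score above the threshold.
    hits = [artifact_score(group, True) > threshold for _ in range(trials)]
    return sum(hits) / len(hits)

rate_a = detection_rate("A")
rate_b = detection_rate("B")
print(f"detection rate, group A: {rate_a:.2f}")
print(f"detection rate, group B: {rate_b:.2f}")
```

Because the threshold is fitted mostly to group A's examples, fakes from group B fall below it far more often — the kind of disparity a more inclusive training set is meant to prevent.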