The phenomenon, known as deepfaking, uses artificial intelligence to map a real person’s likeness onto another video. The results can be uncannily realistic — so much so that a recent study found up to half of viewers couldn't distinguish deepfakes from authentic footage.
Some of medicine's most familiar faces have become targets, with deepfaked videos using their likenesses to promote products they have never endorsed.
John Cormack, a retired doctor from Essex, partnered with The BMJ to uncover the extent of this digital deception. “The bottom line is, it's much cheaper to spend your cash on making videos than it is on doing research and coming up with new products and getting them to market in the conventional way,” Cormack explains.
The proliferation of fake content featuring familiar faces is an inevitable side effect of our current AI revolution, says Henry Ajder, a deepfake technology expert, pointing to the rapid democratisation of accessible AI tools for voice cloning and avatar generation.
The issue has reached such proportions that even the targeted doctors are fighting back. Hilary Jones, for instance, employs a social media specialist to search for and take down videos that misrepresent him.
Meta, the company behind Facebook and Instagram where many of these videos have been found, has promised to investigate. "We don't permit content that intentionally deceives or seeks to defraud others, and we're constantly working to improve detection and enforcement," a Meta spokesperson told The BMJ.
Deepfakes prey on people's emotions, notes journalist Chris Stokel-Walker. When a trusted figure endorses a product, viewers are more likely to believe in its efficacy. This emotional manipulation is precisely what makes deepfakes so insidious.
Spotting deepfakes has become increasingly challenging as the technology improves. Moreover, the sheer volume of non-consensual deepfake videos suggests the scams are enjoying some commercial success, despite being illegal.
For those who find their likenesses being used without consent, there seems to be little recourse. However, Stokel-Walker offers some advice: scrutinise the content for telltale signs of fakery, leave a comment questioning its authenticity, use the platform's reporting tools, and report the account responsible for sharing the post.
As AI continues to blur the lines between reality and digital deception, it's crucial for users to remain vigilant. The faces we trust most could be the very ones leading us astray — at least, digitally speaking.
The full findings of the investigation are published in The BMJ.