Mona Lisa deepfakes developed at the Samsung AI Center in Moscow
Samsung AI Center has developed a new algorithm that can create a deepfake from just one image.
The algorithm can create 'new view angles', so that even if a certain angle of the face is not within the image data, it can still infer what the face should look like.
As the threat from fake news and misinformation grows, awe at the development comes with concern over how such an algorithm could be misused.
People have long wondered what would happen if the Mona Lisa could talk; with Samsung's new deepfake tech, they can find out.
A deepfake is essentially a way to create videos that look real but aren't. Images of one face are superimposed onto a source video or image using machine learning (ML), typically via a technique called a generative adversarial network (GAN).
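The adversarial idea behind a GAN can be shown with a toy example: a generator tries to produce samples that look like real data, while a discriminator tries to tell real from fake, and the two improve each other through opposing gradient updates. The sketch below is purely illustrative (a one-parameter generator on 1-D Gaussian data with hand-derived gradients), not the architecture used by the Samsung researchers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(42)
mu_real = 3.0          # "real" data comes from N(3, 1)
theta = 0.0            # generator: G(z) = theta + z, z ~ N(0, 1)
w, b = 1.0, 0.0        # discriminator: D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(500):
    real = rng.normal(mu_real, 1.0, 64)
    fake = theta + rng.normal(0.0, 1.0, 64)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) ("non-saturating" loss),
    # pulling theta toward whatever fools the discriminator
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)
```

After training, the generator's parameter drifts toward the real data's mean: it has learned to produce samples the discriminator can no longer reliably reject, which is the same adversarial game that, at vastly larger scale, produces photorealistic faces.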
So far, the use of deepfakes has largely been restricted to creating revenge porn or joke videos featuring politicians.
Normally, developers need many images on hand and a lot of time to produce deepfake videos. Samsung's AI team, along with the Skolkovo Institute of Science and Technology in Moscow, can do it with just one image – even if it's Einstein or the Mona Lisa.
Egor Zakharov, the lead author of the research, clarifies, "Our system can learn from different number of frames. One-shot learning from a single frame is possible. Of course, increasing the number of frames (images) leads to head models (deepfakes) of higher realism and better identity preservation."
What's the secret?
The reason this new algorithm is able to create deepfakes with just one picture is that it can integrate 'new view angles'. So even if a particular angle of the face isn't within the images or frames fed into the algorithm, it can still discern what that angle should look like.
Effectively, the model serves as a realistic avatar of the person.
Egor Zakharov, lead author of the research
The algorithm trains itself by tracking a series of landmark facial features – eyes, nose, lips, etc. – and then manipulating them with a neural network to get the desired result.
So, given even a single picture, the algorithm's neural network turns those landmark facial features into realistic-looking moving video.
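The landmark step can be pictured as converting a set of facial keypoints (eyes, nose, lips) into a sparse "pose" image that tells the network where each feature should sit in the output frame. The toy sketch below, with an assumed 68-point dlib-style layout and a made-up `rasterize_landmarks` helper, only plots points; the real system connects landmarks into contours and feeds the result to a generator network.

```python
import numpy as np

def rasterize_landmarks(landmarks, size=256):
    """Render normalized 2-D facial landmarks into a sparse image.

    `landmarks` is an (N, 2) array of (x, y) coordinates in [0, 1].
    Each point becomes a lit pixel on an otherwise black canvas; this
    canvas is the kind of conditioning input a generator can animate.
    """
    canvas = np.zeros((size, size), dtype=np.float32)
    for x, y in landmarks:
        xi = int(np.clip(x * (size - 1), 0, size - 1))
        yi = int(np.clip(y * (size - 1), 0, size - 1))
        canvas[yi, xi] = 1.0
    return canvas

# 68 landmark coordinates (dlib-style count), random here for illustration
rng = np.random.default_rng(0)
points = rng.uniform(0.2, 0.8, size=(68, 2))
sketch = rasterize_landmarks(points)
```

To animate a still portrait, one would extract landmarks from each frame of a driving video, rasterize them this way, and let the trained network paint the single source face onto each landmark sketch.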
And because the algorithm works this way, it becomes harder for viewers to spot whether a video is genuine or a deepfake.
Easier and yet more difficult
It's obvious that the new method simplifies the making of a deepfake. But in a world already battling the growing threat of deepfakes being used as weapons of misinformation and fake news, it's difficult to see how deepfakes can be a force for good.
Edward Delp, the director of the Video and Imaging Processing Lab at Purdue University, remarks, "It's possible that people are going to use fake videos to make fake news and insert these into a political election." He adds, "There's been some evidence of that in other elections throughout the world already."
The team of researchers, on the other hand, see the technology being applied to video conferencing, multi-player games, and the special effects industry.
For movie stars, with hundreds of photographs and video clips already out there in the world, it could mean immortality.