
Samsung's new 'deepfake' tech can make Einstein and Mona Lisa come back to life

  • Samsung's AI Center has a new algorithm that can create a deepfake from just one image.
  • The algorithm can generate 'new view angles', so even if a certain angle of the face is not in the image data, it can still estimate what the face should look like from that angle.
  • As the threat from fake news and misinformation grows, awe at the development comes with concern over how such an algorithm could be misused.
People have long wondered what would happen if the Mona Lisa could talk, and with Samsung's new technology they could find out, using 'deepfakes'.


A deepfake is essentially a way to create videos that look real but aren't. Multiple images of one face are superimposed onto a source video or image using machine learning (ML). The underlying technique is called a 'generative adversarial network' (GAN).
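To make the 'adversarial' idea concrete, here is a minimal, illustrative sketch of one GAN training step in PyTorch. The tiny fully connected generator and discriminator and the 64x64 image size are assumptions made for brevity; this is not Samsung's actual model.

```python
# Minimal sketch of adversarial training: a generator learns to produce
# convincing images while a discriminator learns to tell them from real ones.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a random latent vector to a fake 64x64 RGB image (toy size)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

class Discriminator(nn.Module):
    """Scores how 'real' an image looks (1 = real, 0 = fake)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, real_images, g_opt, d_opt, latent_dim=100):
    """One adversarial round: first the discriminator learns to separate
    real from generated images, then the generator learns to fool it."""
    bce = nn.BCELoss()
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Discriminator step: penalise mistakes on real and fake batches.
    fakes = gen(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(disc(real_images), real_labels) + bce(disc(fakes), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(disc(gen(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

In real deepfake systems both networks are deep convolutional models trained on large collections of face images, but the back-and-forth principle is the same.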

So far, the use of deepfakes has largely been restricted to creating doctored videos, including some featuring politicians.

Normally, developers need a lot of images and a lot of time to produce deepfake videos. Samsung's AI team, along with the Skolkovo Institute of Science and Technology in Moscow, can do it with just one image – even if it's of Einstein or the Mona Lisa.


Egor Zakharov, one of the researchers behind the work, clarifies, "Our system can learn from different number of frames. One-shot learning from a single frame is possible. Of course, increasing the number of frames (images) leads to head models (deepfakes) of higher realism and better identity preservation."

What's the secret?

The reason this new algorithm is able to create deepfakes from just one picture is that it can integrate 'new view angles'. So even if a particular angle of the face isn't in the images or frames fed into the algorithm, it can still discern what that angle should look like.


The algorithm trains itself by tracking a series of landmark facial features – eyes, nose, lips and so on – and then manipulating them with a neural network to get the desired result.

So, even when only one picture is available, the algorithm's neural network turns those landmark facial features into realistic-looking moving video.
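One way to picture this pipeline is 'landmarks in, frames out'. The PyTorch sketch below illustrates that idea under stated assumptions: a hypothetical identity embedding computed once from the single reference photo, and landmark coordinates taken frame by frame from a driving video. The network shape is a toy assumption, not the architecture from the Samsung/Skoltech research.

```python
# Illustrative sketch: condition a generator on facial landmarks plus an
# identity embedding of the one reference photo. In a real system the
# landmarks would come from an off-the-shelf facial-landmark detector.
import torch
import torch.nn as nn

NUM_LANDMARKS = 68  # common 68-point face layout (eyes, nose, lips, jaw)

class LandmarkToFrameGenerator(nn.Module):
    """Turns one set of landmark coordinates (the 'pose' for a single
    output frame) plus an identity embedding into a synthesized 64x64 frame."""
    def __init__(self, identity_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_LANDMARKS * 2 + identity_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Tanh(),
        )
    def forward(self, landmarks, identity):
        # landmarks: (batch, 68, 2) x/y points; identity: (batch, identity_dim)
        x = torch.cat([landmarks.flatten(1), identity], dim=1)
        return self.net(x).view(-1, 3, 64, 64)

# Usage idea: embed the single reference photo once, then feed the landmark
# sequence from a driving video one frame at a time to animate the face.
gen = LandmarkToFrameGenerator()
identity = torch.randn(1, 128)                        # stand-in embedding of the one source photo
driving_landmarks = torch.rand(1, NUM_LANDMARKS, 2)   # stand-in landmarks from one driving frame
fake_frame = gen(driving_landmarks, identity)         # one frame of the 'talking' output
print(fake_frame.shape)                               # torch.Size([1, 3, 64, 64])
```

To animate the Mona Lisa, the identity embedding would come from the single painting, while the landmark sequence would come from a recorded video of a real person moving and speaking.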

And because the algorithm works this way, it becomes harder for people to spot whether a video is genuine or a deepfake.

Easier and yet more difficult

It's obvious that the new method simplifies the making of a deepfake. But in a world already battling the growing threat of deepfakes being used as weapons of misinformation and fake news, it's difficult to see how they can be a force for good.

Edward Delp, director of the Video and Imaging Processing Lab at Purdue University, remarks, "It's possible that people are going to use fake videos to make fake news and insert these into a political election." He adds, "There's been some evidence of that in other elections throughout the world already."

The team of researchers, on the other hand, sees the technology being applied to video conferencing, multiplayer games, and the special-effects industry.

For movie stars, with hundreds of photographs and video clips already out there in the world, it could mean immortality.
