Google is combining still images from Street View to animate real life

Google Street View. Christian Charisius/Reuters

In the not-so-distant future, a new deep learning algorithm from Google could change the worlds of virtual reality and film.  

Today, directors and developers can only create a shot from the camera angles they have physically set up. Bringing deep learning into the field, which essentially turns limited information into plausible real-world structures through educated guesses, could help them achieve otherwise impossible vantage points.

The algorithm, developed using still images from Google Street View, uses multiple pictures in order to "learn" how the images should fit together.


It can then assemble the images into a single fluid animation, as if the viewer were watching a movie.

Developers call it DeepStereo. Have a look.


The algorithm takes each pixel in a given picture and compares the colors and depth to the corresponding pixels in related images.

Using five different vantage points, it then recreates a full picture of the scene. The upper right-hand corner of the demonstration video shows the images collected by Street View.
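The report does not publish the model itself, but the basic idea of combining corresponding pixels from several views can be sketched in a few lines. The toy function below is a hypothetical illustration, not Google's implementation: it assumes the source images have already been reprojected to the target viewpoint, and it simply down-weights pixels that disagree with the consensus across views.

```python
import numpy as np

def blend_views(views):
    """Naively blend several aligned views of the same scene.

    `views` is a list of HxWx3 float arrays assumed to be already
    reprojected to the target viewpoint. A real system such as
    DeepStereo learns that reprojection (and the weighting) with a
    neural network; here both are hand-rolled for illustration.
    """
    stack = np.stack(views).astype(float)          # (N, H, W, 3)
    # Per-pixel consensus across views; outliers (e.g. occlusions)
    # sit far from the median and receive near-zero weight.
    median = np.median(stack, axis=0)
    err = np.abs(stack - median).sum(axis=-1, keepdims=True)
    weights = np.exp(-err / 10.0)
    # Weighted average over the N views.
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)
```

With four agreeing views and one badly misaligned one, the blended result stays close to the consensus color rather than averaging in the outlier.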

Sometimes it still has trouble putting the pieces together. As you can see here, in the algorithm's capture of the Acropolis Museum, a sculpture on the left-hand side comes into view piece by piece.

Google's developers say it is a known problem that should resolve once they can incorporate more vantage points into their existing model.

The technology could eventually be used in cinematography, teleconferencing, virtual reality, and image stabilization, the researchers write in their report on the algorithm.


"To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery," they explain.

Whether that breakthrough causes motion sickness will be another matter. San Francisco's Lombard St., seen in the GIF below, isn't for the faint of stomach.

