Google researchers develop new algorithm that can render 3D scenes from just a few snapshots

  • The new tech can render a 3D model with just five snapshots.
  • This algorithm can identify an object’s shape, size and colour in a given scene.
  • Once perfected, it could eliminate the need to manually label images for AI training.
Researchers at Google recently developed a new AI algorithm called the Generative Query Network (GQN), which further narrows the gap between how humans and computers see things.

Humans can perceive their surroundings at a glance, and now computers will be able to do much the same. The new type of artificial intelligence algorithm can figure out how an object looks from all angles using visual sensors alone; it does not need to see the object from every angle to learn.

So, how does the technology work?

First, the computer is placed in an environment and takes a few snapshots from different perspectives. The GQN then pieces these observations together into an abstract representation of the scene, capturing its essentials. Based on what it ‘learns’, the GQN predicts what the scene would look like from another angle, one not included in the snapshots.
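To make that pipeline concrete, here is a minimal sketch of the GQN idea in Python using PyTorch. The layer sizes, the 7-number camera pose and the simple feed-forward generator are illustrative assumptions; the published model uses a more elaborate convolutional encoder and a recurrent generator, so treat this as a sketch of the structure rather than the actual implementation.

import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Encodes one (snapshot, camera pose) pair into a scene code."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
        )
        # Assumed 7-dim pose vector (e.g. position plus viewing angles).
        self.fc = nn.Linear(128 * 8 * 8 + 7, repr_dim)

    def forward(self, image, pose):
        h = self.conv(image).flatten(1)
        return self.fc(torch.cat([h, pose], dim=1))

class GenerationNet(nn.Module):
    """Predicts the image seen from a query pose, given the scene code."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.fc = nn.Linear(repr_dim + 7, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, scene_code, query_pose):
        h = self.fc(torch.cat([scene_code, query_pose], dim=1))
        return self.deconv(h.view(-1, 128, 8, 8))

def render_novel_view(rep_net, gen_net, context_images, context_poses, query_pose):
    # Key idea: per-snapshot codes are summed into one scene representation,
    # so any number of snapshots can be combined, in any order.
    scene_code = sum(rep_net(img, pose)
                     for img, pose in zip(context_images, context_poses))
    return gen_net(scene_code, query_pose)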

What can the new algorithm do?

According to the published research, the AI system can render a full 3D scene from just five separate virtual snapshots. The algorithm identifies the shape, size and colour of every object in the scene and then integrates them into an accurate 3D model.
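Continuing the sketch above, a hypothetical five-snapshot query might look like this, with random tensors standing in for real images and camera poses:

# Five context snapshots of one scene (batch of 1, 64x64 RGB) and their poses.
context_images = [torch.rand(1, 3, 64, 64) for _ in range(5)]
context_poses  = [torch.rand(1, 7) for _ in range(5)]

rep_net, gen_net = RepresentationNet(), GenerationNet()

# Ask for the view from a camera position never seen in the snapshots.
query_pose = torch.rand(1, 7)
predicted_view = render_novel_view(rep_net, gen_net,
                                   context_images, context_poses, query_pose)
print(predicted_view.shape)  # torch.Size([1, 3, 64, 64])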

With the help of one rendered scene, researchers can create entirely new scenes without having to explicitly lay out which objects need to go where. And all of this is done without any human assistance, supervision or training.

Advantages over existing technology

Until now, teaching an AI model to perceive an object required training it on sets of images, both of the object itself and of scenes that do not contain it. The harder part of the process was that all these images had to be manually labelled by humans, and AI training means many, many, MANY images.

The new algorithm can help generate training images (both with and without the object) and label them itself, reducing the need for human intervention to zero. Think of all the money and time that saves.

Applied to advanced technologies, the algorithm could extend the machine learning abilities of robots and cognitive devices (including those put to military use), giving them greater perception and awareness of their surroundings.

Google scientists also think this could give rise to machines that learn about their surroundings entirely on their own.

For now, the GQN is not refined enough for public release; it is being used to train AI models and improve their precision.

All these findings were published in the journal Science.