These trippy images show how Google's AI sees the world

Engineers trained the network by "showing it millions of training examples and gradually adjusting the network parameters," according to Google’s research blog. The image below was produced by a network that was taught to look for animals.

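Google's post doesn't include the training code, but the process it describes, showing the network labeled examples and nudging its parameters to reduce mistakes, is ordinary supervised learning. Below is a rough sketch in PyTorch; the "animals/" folder and the ResNet-18 stand-in model are assumptions for illustration, not Google's actual setup.

```python
# A minimal sketch of the training process described above: show the network
# labeled example images and gradually adjust its parameters to reduce its
# classification error. The "animals/" folder and ResNet-18 model are
# placeholder assumptions, not Google's actual setup.
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder of labeled photos, e.g. animals/dog/*.jpg, animals/ibis/*.jpg
dataset = torchvision.datasets.ImageFolder("animals/", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = torchvision.models.resnet18(num_classes=len(dataset.classes))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

model.train()
for epoch in range(10):                          # many passes over many examples
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)  # how far off were the guesses?
        loss.backward()                          # gradients for every parameter
        optimizer.step()                         # "gradually adjusting the network parameters"
```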

Each of Google's AI networks is made of a hierarchy of layers, usually about "10 to 30 stacked layers of artificial neurons." The first layer, called the input layer, can detect very basic features like the edges of objects. The engineers found that this layer tended to produce strokes and swirls in objects, as in the image of a pair of ibis below.

As an image progresses through each layer, the network will look for more complicated structures, until the final layer makes a decision about the objects in the image. This AI searched for animals in a photo of clouds in a blue sky and ended up creating animal hybrids.

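To make that layer hierarchy concrete, the sketch below runs a photo through an off-the-shelf pretrained network (VGG-16, standing in for Google's models) and captures what an early layer and a late layer produce before the final layer outputs its decision. The file name "sky.jpg" and the choice of layers are illustrative assumptions.

```python
# Sketch of the layered structure described above. Early layers respond to
# simple patterns like edges and strokes; deeper layers respond to whole
# objects; the final layer outputs a decision about the image.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("sky.jpg").convert("RGB")).unsqueeze(0)  # hypothetical input photo

# Capture what a shallow layer and a deep layer "see" as the image passes through.
activations = {}
model.features[2].register_forward_hook(lambda m, i, o: activations.update(shallow=o))
model.features[28].register_forward_hook(lambda m, i, o: activations.update(deep=o))

with torch.no_grad():
    logits = model(image)              # final layer: a score for every known class

print(activations["shallow"].shape)    # many small edge/stroke detectors
print(activations["deep"].shape)       # fewer, more abstract object detectors
print(logits.argmax().item())          # the network's decision about the image
```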

Here’s the same image of a blue sky put through a network trained to search for buildings, specifically pagodas. Trippy!

These examples show how AI networks trained to recognize towers, buildings, and birds interpreted images of landscapes, trees, and leaves.

The engineers also found that the networks were able to generate, or "see," objects in images of static noise.

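The idea is to run the process in reverse: instead of adjusting the network to fit an image, you adjust the image to excite the network. The sketch below approximates this in PyTorch with a pretrained ResNet-18 as a stand-in; the class index and step count are arbitrary choices for illustration, and none of it is Google's actual code.

```python
# Sketch of how a network can "see" objects in pure noise: start from random
# pixels and repeatedly adjust the image (not the network) so that the score
# for one chosen class climbs. An approximation of the technique Google
# describes, not their implementation.
import torch
import torchvision

model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)            # freeze the network; only the image changes

target_class = 950                     # arbitrary ImageNet class index, for illustration
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # static noise

optimizer = torch.optim.Adam([image], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]
    (-score).backward()                # gradient ascent on the chosen class score
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)         # keep pixels in a displayable range

torchvision.utils.save_image(image, "hallucinated.png")
```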

Google's engineers used this process to verify that the networks were learning the right features for the objects they were meant to recognize. It's hard to tell what this one was looking for: cupcakes, flowers, or oranges.

Networks trained to identify places and building features spat out the trippiest images, like these blue and green arches.

Or this landscape that features arches, fountains, a bus, and red phone booths.

The engineers found that the AI tended to populate specific features with the same object. For example, horizons tended "to get filled with towers and pagodas" and "rocks and trees turn into buildings."

A smiley face emerges from a background of seemingly random arches and circles.

The AI also produced strange images reminiscent of the early-'90s "Magic Eye" books, generated from nothing but static noise. Look closely and you might be able to find something here.

One AI network produced an incredibly nightmarish image of disembodied eyes.

One AI network turned an image of a red tree into a tapestry of dogs, birds, cars, buildings, and bikes.

The engineers also fed works of art to the AI networks. One network superimposed a dog on the screaming figure in “The Scream,” a painting by Edvard Munch.

The majority of the AI networks were trained with images of animals. One network populated an image of a waterfall with dogs, birds, pigs, and goats.

The engineers believe that "inceptionism" may inspire artists to use AI as a "new way to remix visual concepts."

Since AI is so creepily creative, why not let it judge our artwork, too?

The 20 most creative paintings ever — according to a computer >
