Facebook has released AI code that can help machines to 'see'


Mark Zuckerberg. Kevork Djansezian/Getty Images

Facebook has released a batch of code on the code-hosting site GitHub that developers can use to programme machines to "see" what they're looking at.

The Silicon Valley social media giant hopes that open sourcing its machine vision algorithms will help to accelerate developments in the field.

The code dump, announced in a Facebook blog post on Thursday, includes three machine vision tools: DeepMask, SharpMask, and MultiPathNet.

"Together, they have enabled FAIR's (Facebook's AI Research) machine vision systems to detect and precisely delineate every object in an image," wrote Piotr Dollar, an AI research scientist at Facebook, in the blog post. "We're making the code for DeepMask, SharpMask as well as MultiPathNet - along with our research papers and demos related to them - open and accessible to all, with the hope that they'll help rapidly advance the field of machine vision."

DeepMask determines whether there's an object in a given part of an image and produces a rough outline of it, SharpMask refines those outlines, and MultiPathNet attempts to identify what the objects are, according to Engadget.
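To give a rough sense of how the three stages could fit together, here is a minimal sketch in Python. The function names, signatures, and return values are hypothetical placeholders, not the Torch code Facebook actually published; the sketch only illustrates the propose-refine-classify flow described above.

```python
# Hypothetical sketch of a DeepMask -> SharpMask -> MultiPathNet style pipeline.
# The functions below are illustrative stubs, not Facebook's released API.

from typing import List, Tuple

Mask = List[List[int]]  # binary mask: one 0/1 entry per pixel


def deepmask_propose(image) -> List[Tuple[Mask, float]]:
    """Stage 1 (DeepMask-like): propose rough object masks, each paired with
    a score for how likely that region is to contain an object."""
    return [([[1]], 0.9)]  # placeholder: one dummy mask with a high score


def sharpmask_refine(image, proposals: List[Tuple[Mask, float]]) -> List[Tuple[Mask, float]]:
    """Stage 2 (SharpMask-like): refine each coarse mask so its boundary
    follows the object's actual outline more closely."""
    return [(mask, score) for mask, score in proposals]  # placeholder: unchanged


def multipathnet_classify(image, masks: List[Tuple[Mask, float]]) -> List[Tuple[Mask, str]]:
    """Stage 3 (MultiPathNet-like): assign a category label to each refined mask."""
    return [(mask, "dog") for mask, _ in masks]  # placeholder label


def segment_and_label(image) -> List[Tuple[Mask, str]]:
    """Run the full pipeline: propose masks, refine them, then label them."""
    proposals = deepmask_propose(image)
    refined = sharpmask_refine(image, proposals)
    return multipathnet_classify(image, refined)


if __name__ == "__main__":
    fake_image = [[0, 0], [0, 0]]  # stand-in for real pixel data
    for mask, label in segment_and_label(fake_image):
        print(label, mask)
```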

Machines have traditionally struggled to understand what they're looking at in an image, but Facebook believes there are "wide-ranging potential uses for visual recognition technology", including helping blind people to know what is in a photo. Some computers can now tell whether they're looking at a dog or a human, but it's still incredibly difficult for them to work out what type of dog it is and describe it to someone.

Google also releases AI code in this way, but other companies, such as Apple and Amazon, prefer to keep their AI research efforts in-house.
