Most artificial intelligence is still built on a foundation of human toil. Look under the hood of an AI algorithm and you will find something built using data that was curated and labeled by an army of human workers.
Now Facebook has shown how some AI algorithms can learn to do useful work with much less human help. The company built an algorithm that learned to recognize objects in images with little help from labels.
The Facebook algorithm, called Seer (for SElf-supERvised), scraped more than a billion images from Instagram and decided for itself which objects looked alike. Images featuring whiskers, fur, and pointed ears, for example, were grouped into one pile. Then the algorithm was given a small number of labeled images, including some labeled “cats.” It could then recognize images as well as an algorithm trained using thousands of labeled examples of each object.
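The two-stage recipe described above — group unlabeled examples by similarity first, then attach names using only a handful of labeled ones — can be sketched in miniature. This is an illustrative toy, not Facebook's Seer code; the 2-D "features," the cluster count, and the cat/dog labels are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for image feature vectors: two well-separated blobs.
cats = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
dogs = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(50, 2))
unlabeled = np.vstack([cats, dogs])  # stage 1 never sees any labels

def kmeans(points, k=2, iters=20):
    """Plain k-means: group points by similarity, with no labels."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        assign = dists.argmin(axis=1)
        centers = np.array([
            points[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
            for j in range(k)
        ])
    return centers

centers = kmeans(unlabeled)

# Stage 2: name the discovered clusters with one labeled example each.
few_shot = {"cat": np.array([0.1, -0.1]), "dog": np.array([5.1, 4.9])}
cluster_name = {}
for name, point in few_shot.items():
    cluster_name[int(np.linalg.norm(centers - point, axis=1).argmin())] = name

def classify(point):
    """Give a new point the name of its nearest cluster center."""
    return cluster_name[int(np.linalg.norm(centers - np.asarray(point), axis=1).argmin())]

print(classify([0.2, 0.1]), classify([4.8, 5.2]))
```

In Seer's case the grouping operates on learned deep features of a billion Instagram photos rather than 2-D toy points, but the division of labor is the same: the similarity structure comes entirely from unlabeled data, and labels are needed only to name what was found.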
“The results are impressive,” says Olga Russakovsky, an assistant professor at Princeton University specializing in AI and computer vision. “Self-supervised learning is very challenging, and breakthroughs in this space have important downstream implications for improved visual recognition.”
Russakovsky says it is noteworthy that the Instagram images were not hand-picked to make self-supervised learning easier.
The Facebook research is a landmark for an AI approach known as “self-supervised learning,” says Facebook’s chief AI scientist, Yann LeCun.
LeCun was a pioneer of the machine learning approach known as deep learning, which involves feeding data into large artificial neural networks. Roughly a decade ago, deep learning emerged as a better way to program machines to do all sorts of useful things, such as image classification and speech recognition.
However, LeCun says that the conventional approach, which requires training an algorithm on large amounts of labeled data, simply will not scale. “I have been advocating for this whole idea of self-supervised learning for some time,” he says. “In the long run, progress in AI will come from programs that just watch videos all day and learn like a baby.”
LeCun says that self-supervised learning could have many useful applications, for instance learning to read medical images without requiring many labeled scans and X-rays. He says a similar approach is already being used to automatically generate hashtags for Instagram images, and that the technology behind Seer could be used to match ads to posts or to help filter out unwanted content.
The Facebook research builds on steady progress in adapting deep-learning algorithms to make them more efficient and effective. Self-supervised learning was previously used to translate text from one language to another, but it has proven harder to apply to images than to words. LeCun says the research team developed a new way for algorithms to learn to recognize images even when part of the image has been altered.
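That idea — representations that stay stable when an image is altered — underlies contrastive self-supervised objectives. Below is a hedged toy sketch, not Seer's actual training code: embeddings of two lightly perturbed "views" of the same item count as a matching pair, and the loss rewards embeddings that line up matched pairs while separating everything else.

```python
import numpy as np

def normalize(x):
    """Scale each row to unit length so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_loss(view_a, view_b, temperature=0.1):
    """NT-Xent-style loss: row i of view_a should match row i of view_b
    (two augmentations of the same image); all other rows act as negatives."""
    a, b = normalize(view_a), normalize(view_b)
    logits = a @ b.T / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))  # cross-entropy on the true matches

rng = np.random.default_rng(1)
base = rng.normal(size=(8, 16))  # 8 toy "images" as 16-dim feature vectors

# Two noisy views of the same images: the loss should be low...
aligned = contrastive_loss(base + 0.01 * rng.normal(size=base.shape),
                           base + 0.01 * rng.normal(size=base.shape))
# ...and much higher when the pairing is deliberately broken.
mismatched = contrastive_loss(base, np.roll(base, 1, axis=0))
print(aligned < mismatched)
```

At Seer's scale the "views" are aggressive crops and color distortions of real photos and the embeddings come from a large neural network, but the training signal has the same shape: no labels, only agreement between views of the same image.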
Facebook will release some of the technology behind Seer, but not the algorithm itself because it was trained using data from Instagram users.
Aude Oliva, who leads MIT’s computational perception and cognition lab, says the approach “will enable us to tackle more ambitious visual recognition tasks.” However, Oliva notes that the enormous size and complexity of leading AI algorithms such as Seer, which can contain billions or even trillions of neural connections, or parameters (far more than a conventional image recognition algorithm with comparable performance), also present problems: training them demands huge quantities of computational power, straining the available supply of chips.
Alexei Efros, a professor at UC Berkeley, says the Facebook paper is good evidence of an approach he believes will be important to advancing AI: having machines learn for themselves from “huge amounts of data.” And, as with most progress in AI today, he says, it builds on a series of other advances that emerged from the same team at Facebook and from other research groups in academia and industry.