Facebook's chief AI scientist says that Silicon Valley needs to work more closely with academia to build the future of artificial intelligence


Facebook's chief AI scientist Yann LeCun.

  • Facebook's chief AI scientist, Yann LeCun, says that letting AI experts split their time between academia and industry is helping drive innovation.
  • Writing for Business Insider, the executive and NYU professor argues that the dual-affiliation model Facebook uses boosts individual researchers and the industry at large.
  • A similar model has historically been practiced in other industries, from law to medicine.


To make real progress in artificial intelligence, we need the best, brightest, and most diverse minds to exchange ideas and build on each other's work. Research done in isolation, or in secret, falls behind the leading edge.

According to the Nature Index Science Inc. 2017 report, the number of publications resulting from collaborations - not just among academics, which come most naturally, but between academia and industry - more than doubled, from 12,672 in 2012 to 25,962 in 2016. The burgeoning dual-affiliation model, in which academics work inside industry for a time while maintaining their academic position, makes possible not only technological advances like better speech recognition, image recognition, text understanding, and language translation systems, but also fundamental scientific advances in our understanding of intelligence.


Dual affiliation is a boon. It benefits not just the AI economy but individual academics - both researchers and students - as well as industry. We need to champion it.

The Economics of Industry-Academia Collaboration

Worldwide spending on AI systems is predicted to reach $19.1 billion in 2018, according to International Data Corporation. The number of active AI startups is fifteen times larger than in 2000, per Stanford University. And according to Adobe, the share of jobs requiring AI skills is 5.5 times higher than in 2013. Things are going pretty well, and I would argue that's largely thanks to industry-academia collaborations.


For decades, many professors of business, finance, law, and medicine have practiced their profession in the private sector while teaching and doing research at university. A growing number of leading AI researchers, from colleagues here at Facebook AI Research (FAIR) to several of my friends at other technology companies, are embracing a version of dual affiliation. Other academics, such as my old friend Yoshua Bengio at the University of Montréal, have not joined corporate research labs but have played important roles in many companies and startups as advisers or co-founders.

Facebook CEO Mark Zuckerberg.

The dual-affiliation model allows researchers to maximize their impact. Different research environments lead to different types of ideas. Certain ideas flourish only in academic environments, while others can only be developed in industry, where larger engineering teams and greater computing resources are available.

In the past, true collaborations between industry and academia were complicated by overly possessive policies regarding intellectual property - on both sides. But in today's world of fast-paced internet services deployment, owning IP has become considerably less important than turning research results into innovative products as quickly as possible, and deploying them at scale. AI researchers establish priority by publishing their results quickly on open-access repositories such as ArXiv.org. Many papers are accompanied by open-source releases of the corresponding code. This practice has increased the rate of progress of AI-related science and technology and thawed a once icy relationship. Sharing helps everyone now.

Academia and AI

Investment in basic research by industry, the practice of open research and open-source software, and a more relaxed attitude towards IP have together made industry-academia collaborations considerably easier and more fruitful than in the past. But we must keep pushing. What drives new technologies like AI is the speed of adoption by the general population, and what often controls that speed is the number and diversity of talented people who can apply themselves to the problem. There are only so many highly coveted spots at universities. Meanwhile, there is an ever-growing need for top talent in industry - we've made a great start with great leaders in key positions, but we need to support, and drive, exponential growth. We need a deeper bench.


Industry partnerships with academic institutions can help. They increase the net number of students who can be expertly trained in AI, giving them access to significant computing power and training data, with the only expectation being that they contribute to the field in the future. The FAIR lab in Paris currently hosts 15 PhD students in residence, each co-advised by a FAIR researcher and a professor. Ground-breaking research has come out of this program, and I believe our resident PhD students get a better research environment and mentoring than they would in most purely academic settings. The program is so successful that we plan to expand it to 40 students over the next few years. Some students may choose to join FAIR after graduation, but many will choose to join other labs, found a startup, or become professors. This is one way we contribute to the R&D ecosystem.

A file photo of one of Facebook's offices.

The goal for this ecosystem is to improve opportunity for everyone - not only students, but seasoned academics too. Renowned researchers who welcome new opportunities to participate in research outside of academia shouldn't have to jeopardize their careers to do so - yet that often happened in the past, when many academics were forced to choose one or the other.

I spent the first 15 years of my professional career in industry research at AT&T Bell Labs, AT&T Labs-Research, and the NEC Research Institute, before becoming a professor at NYU in 2003. When I joined Facebook in 2013, I was fortunate enough to be able to keep my professor position and split my time between FAIR and NYU. My dual affiliation allows me, among other things, to keep educating the next generation of scientists. The same holds for a number of academics working at FAIR today - some 20% of the time, some 50%, and some 80%, like me. It's also true for the five key research hires we just announced, who will help build our new Pittsburgh lab and FAIR teams in London, Seattle, Paris, and Menlo Park. The dual-affiliation model hedges our personal risk while making our research, and our knowledge, more powerful.

Dual Affiliation, Exponential Progress

For us academics, industry affiliation offers any number of benefits: resources in the form of computing power and funding, more collaboration with others, and the opportunity for immediate real-world application of research, at a scale that proves out hypotheses much faster than in a lab. People often assume such benefits must come with an asterisk - that researchers will be expected to get sucked into the product-shipping machine. In the right industry environments, this simply isn't the case.


In fact, fundamental research really benefits when it is untethered from the resource hunt. The dual-affiliation model lets academics control their own agenda and timeline. Freed from the time crunch, they can identify research trends in both academia and industry and act on whichever is most promising. They are not pressured by product groups to bring their research to application, to achieve "real-world impact" the way many companies with AI-powered products pressure their AI engineers.

At FAIR, for instance, we want researchers to focus on long-term challenges. And in the process of working towards fundamental scientific advances, we often invent new techniques, develop new tools, or discover new phenomena that turn out to be useful. More often than not, ambitious long-term projects end up having product impact much quicker than we thought. Although FAIR is set up as a basic research lab focused on long-term horizons, our work has had a large impact on products for such applications as language translation, image, video and text understanding, search and indexing, content recommendation, and many other areas.

Yann LeCun.

Some of us in AI are working to solve real-world problems that impact billions of people by applying image, text, speech, audio and video understanding, reasoning, and action planning. At FAIR, we openly share our advances as much as we can, as fast as we can in the form of technical papers, open source code and teaching material. We produce new knowledge and tools to educate people on the latest developments and make science progress faster.

Others in industry, academia and government can innovate on top of our work, creating new products, building new startups, and making new scientific discoveries. Our goals are shared, and these advances are for everyone's benefit. The AI software tools we are producing are used by hundreds of groups for research in high-energy physics, astrophysics, biology, medical imaging, environmental protection and many other domains.


I started my professional career at AT&T Bell Laboratories in the late 1980s and saw a culture of ambitious, open research that produced many of the innovations that power the modern world. These innovations - including the transistor, the solar cell, the laser, digital communication technology, the Unix operating system, and the C and C++ languages - had a big impact on AT&T. But these and many more discoveries and innovations, a dozen of which won Nobel Prizes and Turing Awards, have had an even bigger impact on the world at large.

That's what we are after with AI. Understanding intelligence in machines, animals, and humans is one of the great scientific challenges of our time, and building intelligent machines is one of the greatest technological challenges. No single entity in industry, academia, or public research has a monopoly on the good ideas that will achieve these goals. It's going to take the combined effort of the entire research community to make progress in the science and technology of intelligence.

Yann LeCun is Vice President and Chief AI Scientist at Facebook and Silver Professor at NYU affiliated with the Courant Institute and the Center for Data Science. He was the founding Director of Facebook AI Research and of the NYU Center for Data Science. He received a PhD in Computer Science from Université P&M Curie (Paris). After a postdoc at the University of Toronto, he joined AT&T Bell Labs, and became head of Image Processing Research at AT&T Labs in 1996. He joined NYU in 2003 and Facebook in 2013.


This column does not necessarily reflect the opinion of Business Insider.
