Three researchers helped take deep learning mainstream — and transformed the world.
Last week, the $1 million Turing Award — sometimes called the “Nobel Prize of Computing” — was awarded to three pioneers in artificial intelligence: Yann LeCun, Geoffrey Hinton, and Yoshua Bengio.
There’s a cool story behind the work they did.
In the 1980s, researchers briefly got excited about the concept of neural networks, an approach to artificial intelligence that, as the name suggests, resembles how the human brain works. The idea was that rather than following carefully specified rules, neural networks could “learn” the way humans do — by looking at the world. They could start out without preprogrammed preconceptions and make inferences from the data about how the world works and how to work in it.
But after several years of research, the field couldn’t get anywhere with neural net approaches. The hoped-for learning behavior didn’t really materialize, and they underperformed other strategies for AI, like explicitly programming the AI with logical rules to follow. So by the 1990s, the field had moved on.
Hinton, LeCun, and Bengio, though, never really gave up on the idea. They kept tinkering with neural nets. They made substantial improvements on the original concept, including adding “layers” — a structure for organizing the “neurons” in a neural net that significantly improves performance. And eventually, it turned out that neural nets were as powerful a tool as we could have hoped for; they just needed powerful supercomputers, and tons of data, to be useful.
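To get a feel for why layers matter, here's a toy sketch — plain Python with made-up sizes and learning rates, an illustration rather than how modern systems are actually built. It trains a network with one hidden layer on XOR, a pattern that a single layer of neurons famously cannot learn:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: output 1 only when the two inputs differ. A single layer of
# neurons can't learn this; one hidden layer can.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H = 3  # hidden units (an arbitrary small choice for illustration)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    # input -> hidden layer -> output
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, o

def total_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

first_loss = total_loss()
for epoch in range(5000):
    for x, y in data:
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)  # error gradient at the output
        for j in range(H):
            # backpropagate the error through the hidden layer
            d_h = d_o * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_o
final_loss = total_loss()
print(round(first_loss, 3), "->", round(final_loss, 3))
```

Nothing here is preprogrammed about XOR; the network starts from random weights and, by nudging them to shrink its error, “learns” the pattern from the four examples — the behavior the 1980s researchers hoped for, at miniature scale.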
We didn’t have computers powerful enough to take advantage of neural nets until early this decade. When we developed those computers, the neural net AI breakthroughs started. Suddenly, AI and neural nets could be used for image recognition. For translation. For voice recognition. For game-playing. For biology research. For generating text that reads almost as if it were written by a human.
We began to invent different ways of configuring neural nets so that we could get better results from them. For example, to make photorealistic pictures of humans that never existed, you actually train two neural nets: One learns to draw pictures, and the other learns to tell machine-drawn pictures apart from real ones. (This setup is called a generative adversarial network.)
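Here's a deliberately tiny caricature of that two-network idea — plain Python, with the “pictures” shrunk to single numbers and every constant made up for illustration. A one-parameter “generator” tries to imitate a real data point, while a “discriminator” tries to tell the generator's output apart from the real thing; each one's improvement forces the other to improve:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

REAL = 1.0       # the "real data": a single point the generator must imitate
u, c = 0.0, 0.0  # discriminator: d(x) = sigmoid(u*x + c), its guess that x is real
b = 0.0          # generator: just outputs the number b (a caricature of "drawing")
lr = 0.1

for step in range(500):
    # Discriminator step: push d(real) up and d(fake) down.
    d_real = sigmoid(u * REAL + c)
    d_fake = sigmoid(u * b + c)
    grad_u = -(1 - d_real) * REAL + d_fake * b
    grad_c = -(1 - d_real) + d_fake
    u -= lr * grad_u
    c -= lr * grad_c

    # Generator step: adjust b so the discriminator rates it as more "real".
    d_fake = sigmoid(u * b + c)
    grad_b = -(1 - d_fake) * u
    b -= lr * grad_b

print(b)  # the generator's output drifts toward the real data point
```

The adversarial pressure alone — with no direct instruction about what “real” looks like — pulls the generator's output toward the real data. Scaled up to deep networks and millions of photos, the same tug-of-war is what produces those photorealistic faces.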
The paradigm LeCun, Hinton, and Bengio had stubbornly kept working on became the biggest game in town. Today, LeCun is vice president and chief AI scientist at Facebook. Hinton works for Google Brain and the University of Toronto. Bengio founded a research center at the University of Montreal.
And worldwide, thousands of researchers work on neural nets, hundreds of billions of dollars have been invested in hundreds of AI startups, and we keep discovering new applications. There’s no question the Turing Award is richly deserved — rarely does an idea take a field by storm like this.
Watching the field of AI be transformed raises questions about where it’s headed
There’s another way the field of artificial intelligence has been transformed in the past 10 years: Concerns about the societal effects of artificial intelligence are now being taken much more seriously.
There are many possible reasons for that, of course, but one driving factor is the pace of progress in AI over the past decade. Ten years ago, many people felt confident in asserting that truly advanced AI, the kind we had to worry about, was centuries away.
Now, AI systems powerful enough to raise ethical questions are already here, and it’s no longer clear how distant general AI — AI that surpasses human capabilities across many domains — is.
LeCun, Bengio, and Hinton all take AI ethics concerns quite seriously, though they stop short of endorsing fears that their creation will wipe us off the earth. (Hinton’s stance, the most pessimistic of the three, is that nuclear war or a global pandemic will probably get there first.)
“If we had had the foresight in the 19th century to see how the Industrial Revolution would unfold,” Bengio says in his chapter of the 2018 book Architects of Intelligence, “maybe we could have avoided much of the misery that followed. … The thing is, it’s going to take probably much less than a century this time to unfold that story, and so the potential negative impacts could be even larger. I think it’s really important to start thinking about it right now.”
Witnessing the astounding rush of progress this decade is enough to instill caution — and leave us with a lot of uncertainty about what to expect next. A paradigm that many had dismissed as irrelevant turned out, once we had good enough computers, to be an incredibly powerful tool. New applications and new variations were discovered. It’s enough to make you wonder whether that could happen again.
Are there other AI techniques that most researchers aren’t paying attention to but that will break through once computers get better and we finally have tools powerful enough to take advantage of them? Will we keep inventing variants of neural nets that make once-unsolved problems look easy?
It’s hard to predict. But seeing the field totally transformed in the space of a decade gives a sense of how fast, startling, and unpredictable progress can be.