Inside the Allen Institute for Artificial Intelligence, known as AI2, everything is a gleaming architectural white. The walls are white, the furniture is white, the counters are white. It might as well have been a set for the space station in 2001: A Space Odyssey.
“The brilliant white was a conscious choice meant to evoke experimental science — think ‘white lab coat’,” said Oren Etzioni, a computer scientist and director of the new institute, which Microsoft co-founder Paul Allen launched this year as a sibling of the Allen Institute for Brain Science, his effort to map the human brain.
Yet for the 30 (soon to be 50) artificial-intelligence researchers, the futuristic surroundings strike a paradoxical note: AI2 is an effort to advance artificial intelligence while simultaneously reaching back into the field’s past.
While Silicon Valley looks to fashionable techniques like neural networks and machine learning that have rapidly advanced the state of the art, Etzioni remains a practitioner of a modern version of what used to be known as GOFAI, for good old-fashioned artificial intelligence.
The reference goes back to the earliest days of the field in the 1950s and ’60s, when artificial intelligence researchers were confident they could model human intelligence using symbolic systems — logic embedded in software programs, running on powerful computers.
Then in the late 1980s, an early wave of commercial artificial intelligence companies failed, bringing on what became known as the “AI winter”. The field was seen as a failure and went into eclipse.
In recent years, however, AI has come roaring back as speech recognition, machine vision and self-driving cars have made progress with powerful computers, cheap sensors and machine-learning techniques. That has started a Silicon Valley gold rush led by Google, Facebook and Apple, drawing outsiders like Alibaba and Baidu in China, all caught up in a frantic race to hire the world’s best machine-learning talent.
But the debate over how to reach genuine artificial intelligence has not ended, and Etzioni and Allen are betting that their path is more pragmatic.
Allen said his decision to fund an artificial intelligence research lab was inspired by the question of how books and other knowledge might be encoded to become the basis for computer interactions in which human questions might be answered more fully.
Etzioni says that the artificial-intelligence field has made incremental advances in areas like vision and speech, but that it has come no closer to the larger goal of true human-level systems.
“Driverless cars are a great thing,” he said, but added that the field had given rise to “bad AI, like the NSA is using it or Facebook is using it to track you.”
“We want to be the good guys,” he went on, “and it’s up to us to deliver on that.”
The success or failure of the project, however, will ultimately hinge on whether Etzioni can create a new synthesis of artificial intelligence, weaving together powerful machine-learning tools with traditional logic-oriented software.
Both Allen and Etzioni are skeptical of claims that we may be only years away from machines that think in any human sense.
“Full AI, in the sense of something like HAL in 2001,” Allen wrote in an email interview, “is probably a hundred years away (or more). In reality, we are only beginning to grasp how deep intelligence works.”
Etzioni wants AI2 to set measurable goals to help get a new class of learning systems off the ground. During its first year, the researchers have focused on three projects — one in computer vision (in which computers learn to recognise images), one to build a reasoning system capable of taking standardised school tests, and a third to help scholars deal with the fire hose of information that is inundating every scientific field.
“The narrative has changed,” said Peter Norvig, Google’s director of research. “It has switched from, ‘Isn’t it terrible that artificial intelligence is a failure?’ to ‘Isn’t it terrible that AI is a success?’”