
The brains behind artificial intelligence

AAAS Fellow Stuart Russell is using AI to monitor the planet's seismic activity for nuclear shockwaves. (Photo: Peg Skorpinski)

A nuclear weapon detonates. The explosion, an underground test, sends shockwaves through the earth, triggering seismic detectors installed across the globe by the United Nations (UN). The detectors constantly record a sea of vibrations, these nuclear blast waves among them. Complex algorithms scrutinize each of these signals, determining whether it is from an apple falling ten feet from a sensor, a tree collapsing a mile away, a small landslide 50 miles away, a large earthquake a thousand miles away or a nuclear explosion 10,000 miles away.

"It's kind of like listening to several thousand conversations simultaneously and doing speech recognition on all of them at once," says AAAS Fellow Stuart Russell, a computer science professor at the University of California, Berkeley.

Separating the specific signature of a nuclear blast in this way requires a higher form of computing, one that combines the reasoning behavior of the human mind with the data processing power of a supercomputer — essentially an artificial intelligence (AI).
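A toy version of this kind of signal classification can be sketched as scoring a measured feature, here a single log-amplitude value, against a simple statistical model for each candidate source and picking the best fit. The event classes, the feature, and all the numbers below are invented for illustration; the real UN monitoring system works with far richer data and models than this sketch suggests.

```python
import math

# Toy per-class feature models: (mean, std-dev) of log-amplitude.
# All values are invented for illustration only.
EVENT_MODELS = {
    "local noise (falling object)": (-3.0, 1.0),
    "regional event (landslide)": (0.0, 1.5),
    "distant earthquake": (2.0, 1.0),
    "nuclear test": (3.5, 0.5),
}

def log_likelihood(x: float, mean: float, std: float) -> float:
    """Log of a Gaussian density, dropping the constant term."""
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std)

def classify(log_amplitude: float) -> str:
    """Pick the event class whose model best explains the measurement."""
    return max(EVENT_MODELS,
               key=lambda c: log_likelihood(log_amplitude, *EVENT_MODELS[c]))

print(classify(3.4))   # a strong signal scores highest under the toy "nuclear test" model
print(classify(-3.0))  # a faint signal scores highest under the toy local-noise model
```

A real system would combine many features (arrival times, frequencies, readings from many stations at once) and prior probabilities over event types, but the pick-the-best-explanation shape is the same.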

This initiative to monitor the planet's seismic activity for nuclear shockwaves is driven by the UN's Comprehensive Nuclear-Test-Ban Treaty. Already, Russell's new AI-inspired method has enhanced the sensitivity of the monitoring project by a factor of three.

Reasoning, decision-making, learning, natural language, knowledge representation — these are some of the many areas of study within the diverse field of artificial intelligence.

"Unlike other areas, say chemistry or geography," says AAAS Fellow Hector Levesque, "the people in AI work on very different kinds of problems with very different kinds of techniques."

The technologies sprouting from this burgeoning science have made phones smarter, helped computers defeat quiz show champions, enhanced doctors' diagnostic tools, provided individualized education platforms and monitored the planet's seismic activity, to name a few.

An intelligent machine

"It takes two people nine months and 50,000 calories," says AAAS Fellow Gerald Sussman, a professor at the MIT Computer Science and Artificial Intelligence Laboratory. The result is a human being — an intelligent machine containing just a feeble amount of computing power, compared to today's industry standards. 

"A person's total amount of memory can't be more than about a terabit," Sussman calculates. "I can walk into the local computer store and buy a USB stick that's got more memory than I can imagine." About eight flash drives would cover a human brain.
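Sussman's back-of-the-envelope figure can be checked with simple unit arithmetic. The 16 GB flash-drive size below is an assumption chosen only to show how "about eight" drives comes out; the terabit estimate is his.

```python
TERABIT_IN_BITS = 10 ** 12   # Sussman's estimate of total human memory
BITS_PER_BYTE = 8

brain_bytes = TERABIT_IN_BITS / BITS_PER_BYTE   # 1.25e11 bytes, i.e. 125 GB
drive_bytes = 16 * 10 ** 9                      # one 16 GB USB stick (assumed size)

drives_needed = brain_bytes / drive_bytes
print(round(drives_needed, 1))  # 7.8, i.e. "about eight flash drives"
```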

But understanding how our complex, intelligent brains think remains the greatest hurdle for scientists in artificial intelligence.

While Stuart Russell's UN work is technologically demanding, other scientists are developing intricate 3D simulations of seismic events that are immensely useful but require weeks of CPU time on massive supercomputers to create. This contrast points to what Russell says is a core problem in AI, one he has been tackling since the 1980s:

How does the human mind make such focused decisions using so little computing power? This question in turn raises another: What actually is intelligence?


"The definition has to do with the ability to choose actions that achieve one's goals," says Russell. "That's a pretty general definition and it's widely accepted as the gold standard for what it means to be rational or what it means to be intelligent. But the problem is that it's not possible."

A computer playing chess, for instance, can't play a perfect move guaranteed to beat its opponent every time, he says. Just running an algorithm to find one could take decades. "It's not intelligent to play the right chess move long after your opponent has died," says Russell. "Human beings so naturally channel their actions to their goals." (Playing chess, incidentally, is how Russell's interest in AI was first piqued.)
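The scale of the problem Russell describes can be made concrete with the standard rough game-tree estimate for chess: around 35 legal moves per position over a game of around 80 half-moves. Both figures below are conventional approximations, not exact values.

```python
import math

BRANCHING_FACTOR = 35   # rough average number of legal moves per position
GAME_LENGTH = 80        # rough game length in plies (half-moves)

positions = BRANCHING_FACTOR ** GAME_LENGTH
print(f"about 10^{int(math.log10(positions))} lines of play")

# Even examining a billion billion positions per second, an exhaustive
# search would run vastly longer than the age of the universe.
seconds = positions / 1e18
print(f"roughly 10^{int(math.log10(seconds))} seconds of search")
```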

Far more challenging than excelling at chess, a task like making breakfast requires millions of subtle actions that humans perform effortlessly, while robots, for all their enormous stocks of memory, simply lack the programming to match.

Unpacking reasoning behavior

For Hector Levesque, a professor at the University of Toronto and a founding member of the American Association for Artificial Intelligence, the best definition for artificial intelligence is found in the science of knowledge and reasoning.

When humans are faced with a new task, we read books on the subject, research it online or talk to experts, gathering information that helps us solve the problem more quickly. The added input doesn't slow us down. It doesn't make us behave erratically, says Levesque. Our brains don't crash like computers.

"Trying to understand how something like that works is I think one of the most mystifying things we know of in the whole universe," he says.

In reverse-engineering human reasoning behavior, Levesque hopes to reconstruct and translate the human learning process into computer programming. His goal is to represent the information in a way that makes sense not only to computers, but to people as well.

"A computer program in many ways is a meeting of expression and communication," he says. "That's why artificial intelligence is special."


In other words, humans can vastly improve our own learning and understanding capabilities by developing better ways of communicating with machines.

"One of the driving forces in everything that I've been doing that's related to AI is figuring out ways to express our understanding and knowledge of things," says Gerald Sussman. "One of the problems is that the languages we have for describing things are pretty inadequate."

Explanations of complex science, like the recent Higgs boson discovery, inevitably lose accuracy when aimed at the general public. Verbal language, says Sussman, isn't the best way to describe the physics. An analogy that, for instance, compares how the Higgs field slows particles down and gives them mass to a car careening through molasses just isn't enough. Large chunks of this information can only be faithfully explained through complex math, and even then only understood by someone already versed in that language.

This is where smart computer programming works best, he says. When Sussman began his career in software development more than 40 years ago, he realized two things: one, that programming was really fun; and two, that programs could explain things English and math couldn't capture as well, like the method by which he arrives at a complex solution.

For more than four decades, Sussman has worked to develop AI software that can interact with people in ways that both can learn.

"I care a lot about making people smarter," he says, adding: "I also care about making machines smarter."

When the two can communicate better, humans can learn more efficiently than from, say, a professor standing at the front of the class working out a problem on a blackboard.


This type of interactive education is already being experimented with in massive online courses, or MOOCs, like the MIT-Harvard University nonprofit collaboration called edX. For this platform, each class is a blend of open-source interactive learning models extended to millions of students across the globe. It allows researchers immediate feedback on what approaches work best, so that their software can adapt to the educational needs of the students. Eventually, AI software could make this adaptation a constant, seamless process specific to each individual student.

Bringing robots to life

AAAS member David Hanson, of Hanson Robotics, is building robots with rubbery humanoid faces that contort into lifelike expressions. Some walk, like the Einstein robot, while others are literally talking heads. Their camera eyes follow movement and track where people direct their attention, while their software learns to read emotions, in what Hanson calls "planting the seeds of empathy."

Yet artificial intelligence in robots like Hanson's is still young. The supercomputer called Watson, which famously won a Jeopardy! competition, works by bringing to bear on each trivia question an arsenal of stored information far beyond what a human brain can hold. Another popular AI example: Apple's Siri software for the iPhone, which acts as an intelligent personal assistant, can answer spoken questions and make recommendations, but the program is highly specialized for those tasks.

"We're not putting much effort into it yet," says Sussman. "The machinery, the tools, have only gotten sensible in, say, the last 10 or 15 years."

"People are getting excited again in robotics because there's been enough progress in computer vision that you can start to do more exciting things with robots, like self-driving cars and robots that fold your laundry and things that used to be science fiction," says Russell.

Rather than just an efficient search engine, a smart device that can grasp human reasoning could, for instance, debate the philosophies of Aristotle and Plato when explaining the fundamentals of quantum mechanics, says Sussman. As a learning tool, this would change the entire educational structure, providing every student the one-on-one interaction found only with private tutors today.

"And it would argue with a student and the student gets smart," he says. "There's a killer app that I can imagine being around. That's the future."
