For humans, language is a key part of life, one that allows us to communicate ideas and feelings. But just how do our brains create and process language? AAAS Fellow Tom Mitchell is trying to answer this question by studying both humans and one very knowledgeable computer system he's created, named NELL.
As an undergraduate at the Massachusetts Institute of Technology in the early 1970s, Mitchell became interested in human intelligence. At that time, however, he felt the field of psychology lacked the tools to study it.
"I thought it was going to work out better to build artificially intelligent systems," he explains with a laugh.
He earned his doctorate in electrical engineering from Stanford University in 1979, focusing on machine learning, a branch of artificial intelligence concerned with computer systems that can learn from data. Mitchell is now the founding chairman of the world's only Machine Learning Department, located at Carnegie Mellon University (CMU) in Pittsburgh, Pa.
Three years ago, Mitchell created NELL, the Never-Ending Language Learner, a computer system that learns much as humans do: continuously and cumulatively. NELL not only reads and extracts information from over 500 million web pages each day, but also attempts to improve its own reading competence, so that the next day it can extract even more information from the Web with a higher degree of accuracy.
So far, NELL has accumulated 50 million facts about the world that it believes to be true with varying levels of confidence. "If you look at the two million most confident things that it believes, it's roughly 85 percent correct," says Mitchell.
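That loop of reading, scoring, and promoting only the most confident beliefs can be illustrated with a toy sketch. Everything here is hypothetical (the fact strings, the confidence numbers, the 0.85 threshold echoing the accuracy figure above); it is not NELL's actual architecture, just the general shape of confidence-weighted, cumulative learning.

```python
# Toy sketch of confidence-weighted, never-ending learning.
# All names, facts, and thresholds are illustrative, not NELL's real design.

def extract_candidates(page):
    """Pretend extractor: returns (fact, confidence) pairs from a page."""
    return page  # in this toy, a "page" is already a list of such pairs

def learning_iteration(pages, knowledge_base, promote_at=0.85):
    """One reading pass: extract facts, promote only the confident ones."""
    for page in pages:
        for fact, confidence in extract_candidates(page):
            # Keep the highest confidence seen so far for each fact.
            if confidence > knowledge_base.get(fact, 0.0):
                knowledge_base[fact] = confidence
    # "Beliefs" are the facts whose confidence clears the bar.
    return {f for f, c in knowledge_base.items() if c >= promote_at}

kb = {}
pages = [
    [("Pittsburgh is a city", 0.95), ("fox is an animal", 0.60)],
    [("fox is an animal", 0.90)],  # re-reading raises confidence
]
beliefs = learning_iteration(pages, kb)
print(sorted(beliefs))  # ['Pittsburgh is a city', 'fox is an animal']
```

In a real never-ending learner, the promoted beliefs would feed back into the extractor itself, which is what lets the next day's reading be more accurate than the last.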
NELL's knowledge base is available on the Web, where anyone can provide feedback on whether its beliefs are right or wrong. Although NELL is far from perfect, it's getting better, says Mitchell, who works to keep NELL improving all the time. "As its competence is growing," he says, "we're giving it new tasks."
NELL is only one half of Mitchell's research. He's also spent the last ten years using machine learning to understand how the brain represents word meanings. At the start of the project, in collaboration with CMU Professor of Psychology Marcel Just, Mitchell used functional magnetic resonance imaging (fMRI) to study brain activity while subjects were thinking of specific words.
Mitchell developed a computational model that could predict, based on the brain images, which of two concrete nouns a person was thinking about while inside the scanner. The model is 100 percent accurate, even with words and subjects it has never encountered before. Importantly, Mitchell explains, this means that the model captures some of the systematicity in how neural activity is used to represent word meanings. "We're starting to understand where these components of neural activity that are used to assemble [word] meaning come from," he says. Perhaps not surprisingly, Mitchell has found that a widely distributed network of brain regions is involved.
More recently, Mitchell has used another brain-imaging technique, magnetoencephalography, or MEG, to investigate language processing at millisecond time resolution. The brain encodes many different features of a word within a short window: roughly 500 milliseconds pass between when a concrete noun such as "house" is shown and when the brain comprehends its meaning.
In a 2012 paper published in the journal NeuroImage, Mitchell broke down that half second. He developed a computational model that predicts where and when in the brain 229 different features of a word are encoded. At 100 milliseconds, the brain knows the number of letters in the word. At 200 milliseconds, it knows whether the word refers to something alive. Most of the other features kick in around 300 to 450 milliseconds.
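The timeline above can be restated as a small lookup: at any moment after the word appears, which features are decodable so far? The time windows follow the article; the three-entry representation is purely illustrative (the real model tracks 229 features).

```python
# The decoding timeline as a toy lookup. Onsets (ms) follow the article;
# the feature labels and three-entry structure are illustrative only.

TIMELINE_MS = {
    100: "word length (number of letters)",
    200: "animacy (is it alive?)",
    300: "most remaining semantic features (through ~450 ms)",
}

def decodable_by(t_ms):
    """Features the model can read out by time t_ms after word onset."""
    return [feat for onset, feat in sorted(TIMELINE_MS.items()) if onset <= t_ms]

print(decodable_by(250))
# ['word length (number of letters)', 'animacy (is it alive?)']
```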
Mitchell and his lab are now looking at the comprehension of stories, examining fMRI activity as a subject reads a chapter of a Harry Potter novel. Soon, Mitchell hopes to get NELL involved. He wants to have NELL read a paragraph and generate a description of its meaning, and then record a person's neural activity as they read the same paragraph inside the brain scanner.
For example, take the sentence "the brown fox jumps over the fence," says Mitchell. "I want to assume that the brain has to go through the same information processing steps and essentially look for correlations between the steps that NELL takes and the neural activity ... so that we can identify that part of the brain that's making the decision that 'jump' is a verb."
The goal behind putting the two systems together, Mitchell says, is to use NELL as a way of hypothesizing about the intermediate computations that a human does in order to understand a sentence. Those computations, he concludes, are "things that we look for in the neural activity in the brain."