Dr. Laura-Ann Petitto throws a white lab coat over her peach cardigan and grey skirt before giving a quick tour of the Brain & Language Laboratory for Neuroimaging at Gallaudet University, where she is the scientific director. The walls of the space, which includes an observation room, conference room, and neuroimaging laboratory, are covered in photographs, awards, and other mementos collected during her career of more than 40 years.
Despite having already made several discoveries in cognitive neuroscience and developmental cognitive neuroscience, involving bilingualism and the brain, the language capacity of chimpanzees, and the biological bases of language in humans (particularly early language acquisition), she is still working to uncover more mysteries of the brain and how it develops. She is currently leading eight different studies at Gallaudet, and she isn't just a nominal participant: she is deeply engaged in the work.
The tour ends in the neuroimaging laboratory, where students and a colleague are waiting. Gallaudet is a university for deaf and hard-of-hearing students, and Petitto effortlessly shifts from speaking out loud to using American Sign Language to give instruction on how to prep the lab. At the first station in the lab, Petitto speaks with Adam Stone, a graduate student at Gallaudet in education neuroscience (a scientific discipline Petitto helped found) about where probes were placed on a baby’s brain during a previous experiment.
One of the studies she is leading cracked the code of how the human brain is programmed to have peak sensitivity to a certain rhythmicity of language, especially in infants ages 6 to 12 months. “The conundrum in human language [was] how does language start when the baby doesn’t know language,” Petitto said. “How does it start when it doesn’t know meaning? So, the brain is agnostic in regards to the meaning of words. The brain has a problem.
“It’s the same problem that you would have if you were in Russia and you didn’t speak Russian and you’re standing in line at a [bank] and the woman in front of you needs a pen. She turns around to you and she asks you in Russian, which you don’t speak, for a pen. Your sense of her speaking to you is ‘My god, she’s speaking to me so fast.’ You can’t even find the beginning and ending of words. It just sounds like a blur.”
Petitto said the same goes for the baby’s brain. “It pops out of the womb, we’re talking to it and it seems like a mush of words. So the first thing a baby has to do is to be able to find the chunks, the words, so that it can solve the problem of meaning and reference.” She said that she has identified the brain tissue that parses words in order to then assign meaning to them and the rhythm at which it does that.
“The brain is set to find interesting and salient rhythmic chunks in the environment around them at about a hertz or a hertz and a half,” Petitto said. “And when we speak to babies, we speak in this sing-song way and it’s the same thing in sign language. The sing-song [quality] of human language is preserved in the hands and there’s this way in which we communicate with babies that matches the baby's brain. The baby is sensitive to specific rhythmicity, that it’s in the environment around [the baby], and that’s how the baby cracks into the linguistic stream.”
They programmed that rhythmicity into a robot and an avatar, which was no small task. The coalition she is working with comprises her team at Gallaudet, the Infrared Imaging Lab of the Institute of Advanced Biomedical Technologies, the University of Southern California’s Institute for Creative Technologies, and Yale University’s Social Robotics Lab. The avatar can produce nursery rhymes.
“The reason why we’re going to do this is because there are millions and millions of children who have minimal exposure to language in early life,” Petitto said. “And we know that they have this minimal language exposure during this peak period that’s very important for human language acquisition, from 6-12 months of age. Some children get no language input at this time and we know that it has devastating impact[s] on children’s subsequent ability to learn vocabulary, to learn language and importantly to learn to read.”
The robot, which is named Maki, is the first device that will simulate human interaction for infants at risk of having minimal language exposure early in life. Petitto says it is meant to be augmentative, not a replacement for actual interaction. One group that would be aided by the device is deaf children who are slated to receive cochlear implants.
“Language is completely withheld from them. They can’t hear spoken language...and the operation is done between 12 months and 18 months and then it takes months and months to tune it and train the children. So these children effectively go with no accessible, usable language anywhere from 18 months to 24 months,” Petitto explained. She said that Maki and the avatar could help keep the child’s brain tissue open for subsequent learning.
Petitto’s curiosity about the mysteries of language was first piqued when she was 11 years old.
“One of the things that I appreciated very early and that fascinated me is [how] the sensory systems [work, like] the human eye. … If you’re looking at an apple on the table and right now we opened your optical nerve, we would not see an apple. We would see neurochemical activity, and so that’s called the problem of transduction,” she said.
In a preview of the tenacity for pursuing knowledge she has shown throughout her career, the 11-year-old Petitto went and learned about the problem of transduction after encountering it through optical illusions. It wasn’t long before she turned her attention to language and the brain.
“If you look at the landscape, one of the quintessential ways that our species comes to know is through language. So I naturally gravitated to language.”