Research

Below are descriptions of some of our recent research projects.

Infant and toddler listening in noise


One of the first tasks facing an infant is learning his or her native language. This is difficult enough in a quiet learning environment, yet infants and toddlers are regularly exposed to speech in noisy settings. For example, a caregiver may be talking to an infant while other siblings are playing in the next room. To learn from speech in these settings, the infant must first separate it from the background noise produced by TV shows and siblings. How do infants and young children acquire their native language under such conditions?

This question has been one of the fundamental topics of our lab's research. We found that typically developing infants do have some capacity to understand speech even in the presence of other talkers. However, infant listeners are far more sensitive to background noise than we expected. They can recognize well-known words (such as their own name) in quiet settings, but fail to do so at noise levels comparable to those found in many day care centers. These findings are particularly important given concerns over the quality of childcare environments and the impact such environments might have on language acquisition. Our lab is currently conducting follow-up research exploring which types of noise are most disruptive to infants and toddlers, and what parents can do to make it easier for infants to attend to their voices. We are also collaborating with labs at McGill University and the University of Toronto to examine whether bilingual infants might be better at perceiving speech in noisy environments.


We are also expanding this work to other populations by examining the ability of children with autism to understand speech in the presence of background noise. This work is being conducted in collaboration with faculty members in the Departments of Psychology and Hearing & Speech Sciences who are part of the University of Maryland Autism Research Consortium.

Here are some example sounds from a recent word-learning study; the target and background speakers are speaking at the same loudness level.

Children's understanding of speech from a cochlear implant


Cochlear implants (or CIs) are prosthetic devices that provide the perception of sound to severely hearing-impaired listeners. But because of both technological and biological limitations, these devices cannot transmit the full speech signal. Current implant technology is based entirely on work with adults, yet children are frequently implanted, often as young as 12 months of age. If we understood how well children can make sense of these kinds of limited signals, we could design implants better suited to children's needs.

Our lab has been conducting a number of studies examining how well children with normal hearing perceive speech that simulates what is heard through a cochlear implant; that is, how well children can compensate for "partial" (sparse) auditory signals. This work is in collaboration with other researchers in the Department of Hearing & Speech Sciences and at Boys Town National Research Hospital.

Adjusting for variation in who is speaking

No two people produce sounds in exactly the same way, and no single person speaks the same way across situations. Some talkers speak slower than others, some talkers speak with a different accent or dialect, and some talkers change their pitch when they get excited.


How do we adjust for these differences? For example, how do adults identify that djuh, didjuh, and did you all mean the same thing? How does a young listener come to realize which differences in the signal are the important ones (for instance, that bat and pat mean different things, while car produced by someone from the Deep South means the same thing as car produced by someone from New England)? We have been examining this issue in both adults and children, asking questions such as how listeners adjust their perception to the rate at which speakers talk, and how toddlers learn to understand speakers with foreign accents.

Finding the right words

Adult speakers know thousands of words and must select the correct one to match the incoming speech signal. The same problem occurs when speaking: the speaker has to select and retrieve, from thousands of possible choices, the words that match the concepts he or she wants to express. Both processes require fast and accurate access to the mental dictionary (called the lexicon).

We have all occasionally experienced situations where we just couldn't find the right word (a "tip-of-the-tongue" moment); these situations occur more often when we are tired or stressed, and they occur more often as we age. They also occur particularly often for some clinical populations, such as children with specific language impairment, high-schoolers with learning disabilities, and even adults who have recently suffered a concussion. We have been investigating what properties of words influence the ease with which they are accessed and used, and how these processes differ in clinical populations. Understanding the underlying cause of word-finding difficulties may make it easier for us to identify a way to compensate.

Breaking up the speech signal

Have you ever listened to someone speaking a language you don't know? Often, it sounds as though they never stop to breathe and never pause between words. In fact, English is no different: speakers do not put breaks between words when they talk (unlike in this written text)! But because we know the language, we can identify where the breaks should be and insert them on our own. This process is referred to as "segmentation," and understanding how listeners do it is a major question in the field of speech perception. Now imagine what it must be like for a young child who hasn't yet learned his or her native language. Our lab is exploring what types of information listeners use to help them identify the individual words in the speech signal.

Brain injury and language development

Each year, nearly half a million children go to the hospital with a head injury. Children and teens are at higher risk of brain injury than adults, and they often require longer recoveries. Although the physical effects of a mild injury like a concussion typically subside within a few weeks, some people continue to experience difficulties in thinking, concentrating, finding the right words, or participating in conversation after they return to their typical activities. For children, these difficulties can be especially problematic because they can affect education and social growth, yet the effects of concussion on children's language are poorly understood. Our research builds on our understanding of language development to explore how early brain injury can affect the development and maintenance of language, critical thinking, and emotion in young children and adults. Click here to learn more about our projects in this area.

BITTSy: Behavioral Infant and Toddler Testing System

We are developing a new experimental testing platform that will allow researchers anywhere in the world to set up the same system in a uniform manner and to use multiple testing paradigms with the same laboratory setup. This will allow researchers to collaborate more effectively and compare results across locations, and thus facilitate our understanding of early language development across diverse circumstances.