Heather Bortfeld (UConn) on How Infants Learn to Process Sound with a Cochlear Implant
Cochlear implants have the biggest impact on very young children, whose brains are still quite plastic and who can therefore learn to understand language. It's really quite amazing that we all learn to associate neural firings from our auditory nerve with speech, and cochlear implant patients allow us to study how a new (and slightly older) set of kids responds to their first auditory stimulation. How do we figure out that some auditory-nerve firings correspond to other neural firings from the visual system? How do we turn this all into language?
Who: Heather Bortfeld (University of Connecticut)
What: Infants will tune to (pretty much) anything: implications for pediatric cochlear implant users
When: Friday January 25th at 1:15PM
Where: CCRMA Seminar Room, Top Floor of the Knoll
Why: Because learning to perceive sound and speech is pretty amazing
Infants will tune to (pretty much) anything: implications for pediatric cochlear implant users
Heather Bortfeld (U Conn)
Cochlear implants improve the ability of profoundly deaf children to understand speech by providing a way for sound to be transmitted to the auditory system, despite the lack of a working conduction system in the inner ear. Much of what we know about the course of auditory learning following cochlear implantation in young children is based on behavioral indicators that they are able to perceive sound. However, congenitally deaf children have no concept of what sound is, and thus show highly variable behavioral responses when initially exposed to it. In recent work, my collaborators and I have begun tracking changes in cortical activity in infants and young children in response to specific auditory stimulation following cochlear implantation. We are also testing typically developing infants’ ability to match degraded audio speech streams (e.g., sine wave speech) to the corresponding visual speech stream. Results from the latter work demonstrate that preverbal infants are quite flexible in what auditory information they are able to identify as speech. These findings are consistent with a multi-stage view of audiovisual speech integration and suggest that infants initially combine audiovisual information based on low-level perceptual cues. This has theoretical and practical implications for pediatric cochlear implant users.
Heather Bortfeld received her Ph.D. in Experimental Psychology in 1998 from the State University of New York at Stony Brook, where her research focused on language accommodation and change during conversations between native and non-native speakers. She then completed a postdoctoral fellowship in the Department of Cognitive and Linguistic Sciences at Brown University with NRSA support from NIH, where she began working with preverbal infants. While working in Jim Morgan’s lab at Brown, she learned to test infant speech perception using a variety of behavioral techniques. Her first tenure-track position was in Cognition at Texas A&M University, where she began applying near-infrared spectroscopy (NIRS), an emerging neurophysiological technique, to the study of infant speech perception. She also began collaborating with John Oghalai, then at Baylor College of Medicine, now at Stanford Medical School, using NIRS to assess changes in cortical activity preceding and following cochlear implantation. She moved to the University of Connecticut as an Associate Professor in 2009, where she is Director of the Husky Pup Language Lab, as well as a Senior Scientist at Haskins Laboratories. In her current research, she continues to focus on infant speech perception and language development, while also developing NIRS for use in basic research and for clinical applications. Her work has appeared in Developmental Science, NeuroImage, Human Brain Mapping, and Psychological Science. She is supported by the National Institutes of Health and the National Science Foundation.