Alex Brandmeyer: Auditory perceptual learning using decoded-EEG neurofeedback
Date:
Wed, 04/24/2013 - 5:15pm - 7:00pm
Location:
CCRMA Classroom, The Knoll 2nd floor, Rm 217
Event Type:
Colloquium

Auditory perceptual learning is a process in which auditory perceptual skills improve through both passive and active exposure to sounds in the environment, and which underlies our abilities to perceive language and music. Individual differences in these perceptual skills can be observed both in our behavior and in our brains’ automatic responses to sounds (i.e., auditory evoked responses).

This talk reports on the results of a study that applied a series of multivariate pattern classification analyses to an EEG dataset collected from native and non-native speakers of English, in order to investigate these analyses’ ability to track ongoing passive auditory perception at the single-trial level. The talk also covers a series of pilot studies in which a similar classification approach was used to realize a novel form of neurofeedback based on single-trial measurements of evoked responses to simple auditory tone stimuli.

The results of these experiments suggest that, depending on stimulus features and participant instructions, the presentation of such feedback can lead to the modulation of distinct components of the auditory event-related potential. The ability to selectively modulate brain activity underlying ongoing perception via neurofeedback could eventually lead to a novel type of brain-computer interface for learning paradigms involving both healthy and clinical populations.
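The single-trial classification idea in the abstract can be illustrated with a minimal sketch: a nearest-class-mean (template-matching) classifier over vectorized EEG epochs. Everything below — the simulated data, epoch dimensions, and the classifier itself — is an illustrative assumption, not the actual analysis pipeline used in the study:

```python
# Minimal sketch: single-trial classification of auditory evoked responses
# using a nearest-class-mean (template-matching) classifier on synthetic
# EEG epochs. All data and parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_TRIALS, N_CHANNELS, N_SAMPLES = 100, 8, 64  # per class; assumed sizes

def simulate_epochs(amplitude):
    """Simulate noisy single-trial epochs containing one evoked deflection."""
    t = np.linspace(0.0, 0.5, N_SAMPLES)                  # 0-500 ms window
    erp = amplitude * np.exp(-((t - 0.1) ** 2) / 0.002)   # peak near 100 ms
    noise = rng.normal(0.0, 1.0, (N_TRIALS, N_CHANNELS, N_SAMPLES))
    return noise + erp                                    # broadcast over trials/channels

# Two stimulus classes that differ in evoked-response amplitude
class_a = simulate_epochs(2.0)
class_b = simulate_epochs(-2.0)

# Vectorize each epoch (channels x samples -> one feature vector per trial)
X = np.concatenate([class_a, class_b]).reshape(2 * N_TRIALS, -1)
y = np.array([0] * N_TRIALS + [1] * N_TRIALS)

# Shuffle and split into train/test halves
idx = rng.permutation(len(y))
train, test = idx[: len(y) // 2], idx[len(y) // 2:]

# Fit class-mean "templates" on the training trials
templates = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

# Classify each test trial by its nearest template (Euclidean distance)
dists = np.linalg.norm(X[test][:, None, :] - templates[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
print(f"single-trial accuracy: {accuracy:.2f}")
```

In a neurofeedback setting of the kind described above, a per-trial decision like `pred` (or a graded distance score) could in principle drive the feedback signal presented to the participant.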
Bio:
I'm a PhD student in the Cognitive Artificial Intelligence department at the Donders Centre for Cognition, working in the Brain Computer Interface (BCI) research group. My main interests lie in multi-modal perception, music cognition, music and digital audio technologies, artificial intelligence, and cognitive neuroscience. Previously, I have participated in research on the perception of speech in noisy environments (Cocktail Party Effect), and the accompanying perceptual changes that occur with aging. I also did my Master's thesis and 3 years of technical work on the Practice Space project here in Nijmegen, developing real-time visual feedback technology for music education purposes.
Open to the Public