Decoding inner speech from intracranial recordings in the human brain
Date:
Mon, 11/25/2019 - 4:00pm - 5:30pm
Location:
CCRMA Seminar Room
Event Type:
Hearing Seminar

Decoding is the process of making sense of continuous brain signals. Over the last few years we've talked about decoding speech from brain signals, that is, estimating the speech that generated the recorded brain signal. A now-classic experiment determines which of two audio signals a subject is attending to. This has been done with EEG, MEG, and ECoG signals, and it works quite well. But all this work is based on an active (external) signal that guides the machine learning. What can you do if there is no audio or other external signal?
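To make the classic attended-speaker paradigm concrete, here is a minimal sketch of one common recipe, stimulus reconstruction with a linear backward model: reconstruct the attended speech envelope from the multichannel neural recording, then correlate the reconstruction with each candidate audio stream and pick the stream with the higher correlation. Everything here is assumed for illustration (the synthetic data, the ridge penalty, the variable names); it is not the specific method from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (hypothetical; real studies use EEG/MEG/ECoG).
n_samples, n_channels = 5000, 16
env_a = np.abs(rng.standard_normal(n_samples))  # envelope of stream A (attended)
env_b = np.abs(rng.standard_normal(n_samples))  # envelope of stream B (ignored)
mixing = rng.standard_normal(n_channels)
# Neural channels = attended envelope projected onto sensors, plus noise.
neural = np.outer(env_a, mixing) + 0.5 * rng.standard_normal((n_samples, n_channels))

# Fit a backward (stimulus-reconstruction) model on a training segment:
# ridge regression from channels to the attended envelope.
train, test = slice(0, 4000), slice(4000, None)
X, y = neural[train], env_a[train]
lam = 1e-2  # ridge penalty (assumed value)
w = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)

# Decode attention on the held-out segment: reconstruct the envelope,
# then compare its correlation with each candidate stream.
recon = neural[test] @ w

def corr(u, v):
    u, v = u - u.mean(), v - v.mean()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

r_a, r_b = corr(recon, env_a[test]), corr(recon, env_b[test])
print(f"r(A)={r_a:.3f}  r(B)={r_b:.3f}  attended:", "A" if r_a > r_b else "B")
```

With real recordings the decoder would include temporal lags and cross-validation, but the correlate-and-compare step is the same idea; the point of the talk is what to do when no such external reference signal exists.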
Stephanie Martin has been looking at decoding purely imagined speech!!! I've tried; this is difficult. What's the ground truth when everything is inside the head? Can you tell what speech I am imagining? Certainly a large part of the brain is involved in imagined speech, including the motor and perhaps the auditory areas. There is a signal.
Who: Stephanie Martin (UCB -> Geneva -> UCSD)
What: Decoding inner speech from intracranial recordings in the human brain
When: Monday, November 25th at 4PM <<< Note special time on Thanksgiving week!!!
Where: CCRMA Seminar Room
Why: Reading brain signals (and understanding them) is the holy grail!
This talk is at 4PM on Monday (not the usual 10:30 on Friday). Parking is free near CCRMA after 4.
- Malcolm
Decoding inner speech from intracranial recordings in the human brain
Dr. Stephanie Martin — UCSD
Certain brain disorders limit verbal communication even though patients remain fully aware of what they want to say. To help them communicate, a few brain-computer interfaces have proven useful, but they rely on indirect actions to convey information (e.g., performing mental tasks such as imagining a rotating cube, doing mental arithmetic, or attempting movements). As an alternative, we explored the ability to directly infer intended speech from brain signals using diverse machine learning algorithms. In this talk, I will present my PhD work, which aimed to decode discrete and continuous inner speech representations from electrocorticographic recordings in patients with epilepsy. I will also highlight some of the challenges faced when targeting inner speech decoding, as well as the opportunities for assistive technologies.
Stephanie Martin received her PhD in neuroscience from the Swiss Federal Institute of Technology, Lausanne (EPFL). She did her thesis in the Brain-Machine Interface Lab of Prof. José Millán, where she evaluated the feasibility of directly decoding the neural correlates of inner speech, with the goal of enabling natural speech decoding technologies. In particular, she investigated the neural mechanisms of various inner speech representations, such as acoustic features, phonemic features, and individual words, using intracranial recordings. This work was performed in close collaboration with Prof. Robert T. Knight's group at UC Berkeley, where she completed her Master's thesis and spent a quarter of her PhD. Stephanie then did a short postdoc in the Auditory Language Lab of Anne-Lise Giraud (University of Geneva), where she studied how the brain makes predictions about the environment. Currently, Stephanie is doing a postdoc at the Halıcıoğlu Data Science Institute at UC San Diego, where she studies interference during verbal working memory.
FREE
Open to the Public