Stefania Serafin - Multisensory experiences for hearing rehabilitation
Date:
Fri, 10/21/2022 - 11:50am - 12:20pm
Location:
CCRMA Stage (Upstairs)
Event Type:
Hearing Seminar 
Our eyes and our ears are pretty good at working together. But what can touch add to our perception? We can certainly *feel* the bass from loud music. And braille is a good way to convey language. Can touch add to the information conveyed by audio? Most importantly, how do we get the information from the two modalities to fuse?
I’m very happy that Prof. Stefania Serafin is returning to CCRMA to talk about her work using haptic (touch) information to convey more information to users who are profoundly hard of hearing. The tools we have to help those with less-than-perfect hearing are limited. Can we use haptic signals, applied to the fingers, wrist, or other parts of the body, to help these users better understand the audio around them? In her latest work, Prof. Serafin uses haptic feedback to help convey musical information to users with cochlear implants (which do an adequate job for speech but are lousy for music).
Who: Stefania Serafin (Aalborg, Denmark)
What: Multisensory Experiences for Hearing Rehabilitation
When: Friday, October 21 at 11:50AM
Where: CCRMA Stage (upstairs)
Why: How does the brain integrate information?
Note: Stefania’s talk (and the Hearing Seminar) has been incorporated into an all-day CCRMA open house. The other talks surrounding the Hearing Seminar talk are listed below. Come for any talks that interest you. All the details are at: https://ccrma.stanford.edu/events/ccrma-open-house-2022
9:30-9:40: Prateek Verma, Jonathan Berger, Chris Chafe, "Audio Understanding and Room Acoustics in the Era of AI"
9:45-9:55: Nicholas Shaheed, "Ganimator: Live, Interactive Animation With Generative Adversarial Networks"
10:00-10:10: Nils Tonnätt, "Simulating the Scalability of Audio over IP Networks"
10:15-10:25: Pat Scandalis, Nick Porcaro, Julius Smith, Jordan Rudess, "MPE/MIDI (MIDI Polyphonic Expression) for Instrument Creators"
10:30-10:40: Ge Wang, Kunwoo Kim, "Project VVRMA (CCRMA in VR): Adventures in Computer Music Land"
11:00-11:10: Romain Michon, Chris Chafe, Fernando Lopez-Lezcano, Julius Smith, Dirk Roosenburg, Mike Mulshine, Tanguy Risset, Maxime Popoff, "PLASMA: Pushing the Limits of Audio Spatialization with eMerging Architectures"
11:15-11:25: Hongchan Choi, "Web Music Technology: Now and Where To?"
11:30-11:40: Barbara Nerness, Tysen Dauer, Takako Fujioka, "Pauline Oliveros’s Investigations Into the Effects of Sonic Meditations on Behavior and Neurophysiology"
**** 11:50-12:20: Stefania Serafin, Emma Nordahl, "Multisensory Experiences for Hearing Rehabilitation"
12:30-12:40: Nick Porcaro, Julius Smith, Pat Scandalis, "First Sighting of GeoShred on Linux"
12:45-12:55: Diana Deutsch, "The 'Phantom Words' Illusion"
CCRMA’s Covid policy is as follows: Stanford still requires masking in class except when speaking. CCRMA staff prefer to err on the side of inclusion and universal design by continuing to mask indoors, and we ask you to consider doing the same. Don’t come to campus if you are feeling sick. This is especially important as the Hearing Seminar meets in a small room to promote discussion, a tradition I hope continues.
We are not yet at the stage of having audio-haptic illusions (à la the McGurk effect, where visual cues change audio perception), but I think integration across modalities is a difficult and profound question. I’m looking forward to hearing about Prof. Serafin’s successes and the limits she has found.
- Malcolm
Abstract:
In this talk I will present different research projects we are currently involved in at the Multisensory Experience Lab at Aalborg University in Copenhagen (melcph.create.aau.dk). Specifically, I will focus on our collaboration with the Center for Hearing and Balance at Rigshospitalet in Denmark. In this collaboration, we use technologies such as custom-made haptic interfaces and virtual and augmented reality to help hearing-impaired individuals train their listening skills. See, for example, https://dl.acm.org/doi/10.1007/978-3-031-15019-7_2
Biography:
Prof. Stefania Serafin is a professor at the Department of Architecture, Design and Media Technology at Aalborg University in Copenhagen. She completed her master's degree in acoustics, computer science, and signal processing applied to music at IRCAM (Paris) in 1997, and received her PhD in computer-based music theory and acoustics from Stanford University in 2004. Before becoming a full professor, Serafin worked as an associate professor and lecturer, also at Aalborg University. Her research focuses on sound models and sound design for interactive media and multimodal interfaces. She is president of the Sound and Music Computing Network, a portal for the sound and music computing community. She is also project leader of the Nordic Sound and Music Computing Network (NordicSMC), a university hub for sound and music computing led by the field's internationally leading researchers from the Nordic countries.
FREE
Open to the Public