Hearing Seminars

CCRMA hosts a weekly Hearing Seminar. All areas related to perception are discussed, but the group emphasizes topics that help us understand how the auditory system works. Speakers are drawn from the group and from visitors to the Stanford area. Most attendees are graduate students, faculty, or local researchers interested in psychology, music, engineering, neurophysiology, and linguistics. Meetings are usually held on Friday mornings from 11 AM to 12:30 PM (or a bit later, depending on questions) in the CCRMA Seminar Room.

The current schedule is announced via a mailing list. To be added to the mailing list, send email to hearing-seminar-request@ccrma.stanford.edu.  If you have any questions, please contact Malcolm Slaney at hearing-seminar-admin@ccrma.stanford.edu.

Recent Hearing Seminars

  • Christophe Micheyl on Small and Big Data Challenges for Hearing Aids

    Date: 
    Fri, 12/05/2014 - 11:00am - 12:30pm
    Location: 
    CCRMA Seminar Room
    Event Type: 
    Hearing Seminar
    All of us are familiar with the basic goal of a hearing aid: amplify sound. But a more difficult issue is figuring out the right parameters to help a user. Getting the right feedback from a patient who doesn't understand what they are hearing is difficult. There are dozens (hundreds) of parameters in modern hearing aids. How do we take a patient's complaint that they can't hear in a restaurant and figure out what that means for their sound-processing needs? It's more than just turning up the volume.
    FREE
    Open to the Public
  • Auditory Imagination and Priming: Pilot projects from the 2014 Telluride Neuromorphic Engineering Cognition Workshop

    Date: 
    Fri, 11/21/2014 - 11:00am - 12:30pm
    Location: 
    CCRMA Seminar Room
    Event Type: 
    Hearing Seminar
    A lot of work is done to understand the bottom-up pathways in the brain. This summer’s work looked at top-down influences. Just how do auditory imagination and priming affect what we hear? More importantly, can we see evidence of priming or auditory imaginations via either psychoacoustics or with EEG measurements? The answer is a tentative yes.

    At this week’s Hearing Seminar, I want to describe several pilot experiments that were done over the summer. This work was part of the Telluride Neuromorphic Cognition Workshop that is held every summer in Telluride, CO. It’s a rather scenic location, but it is totally inundated with auditory perception nerds (and others) for three weeks of a working workshop. Science in the mountains. Imagine that.

    FREE
    Open to the Public
  • Noise reduction using artificial auditory neurons

    Date: 
    Fri, 11/07/2014 - 11:00am - 12:30pm
    Location: 
    CCRMA Seminar Room
    Event Type: 
    Hearing Seminar
    Species throughout the animal kingdom excel at extracting individual sounds from competing background sounds, yet current state-of-the-art signal-processing algorithms struggle to process speech in the presence of even modest background noise. Recent psychophysical experiments in humans and electrophysiological recordings in animal models suggest that the brain is adapted to process sounds within a restricted domain of spectro-temporal modulations found in natural sounds. We show how an artificial neural network trained to detect, extract and reconstruct the spectro-temporal features found in speech can significantly reduce the level of the background noise while preserving the foreground speech quality, improving speech intelligibility and automatic speech recognition along the way.
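    As a toy analogue of spectro-temporal masking (purely illustrative; nothing like the trained network described in the talk), an oracle ratio mask can suppress background noise in a spectrogram. All signal choices below are my own assumptions:

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    """Toy short-time Fourier transform: windowed frames -> rfft."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

rng = np.random.default_rng(0)
sr = 8000
t = np.arange(2 * sr) / sr
speech = np.sin(2 * np.pi * 300 * t)        # stand-in for the foreground talker
noise = 0.8 * rng.normal(size=t.size)       # broadband background
S, N = stft(speech), stft(noise)
X = S + N                                   # spectrogram of the noisy mixture

# oracle (Wiener-style) ratio mask: keep bins where "speech" dominates
mask = np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12)
denoised = mask * X
```

    In the real system the mask is predicted by a network from the noisy input alone; here it is computed from the known clean components only to show the masking idea.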
    FREE
    Open to the Public
  • Prof. Simon Carlile on Binaural Representations

    Date: 
    Fri, 10/31/2014 - 11:00am - 12:30pm
    Location: 
    CCRMA Seminar Room
    Event Type: 
    Hearing Seminar
    How is binaural hearing processed and represented in the brain? We have an almost magical ability to perceive the location of sounds. We know the basic cues (interaural level differences and interaural time differences), but how does the eventual location get represented? Conventional wisdom is that it is represented along a linear axis. But could it be represented in a different way? I dare say that perceptual representations are the biggest piece of the neurological puzzle that we are missing….
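    As a sketch of one of those basic cues (my illustration, not Prof. Carlile's material), the interaural time difference can be estimated from the peak of the cross-correlation between the two ear signals. The stimulus and delay below are illustrative assumptions:

```python
import numpy as np

def estimate_itd(left, right, sr):
    """Estimate the interaural time difference (seconds) from the peak
    of the cross-correlation. A positive value means the right-ear
    signal lags the left, i.e. the source is closer to the left ear."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / sr

# toy stimulus: a noise burst reaching the right ear 5 samples later
rng = np.random.default_rng(1)
sr = 8000
sig = rng.normal(size=2000)
delay = 5                                 # ~0.6 ms, within the human ITD range
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])
itd = estimate_itd(left, right, sr)
```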

    Who: Prof. Simon Carlile (University of Sydney)
    What: Six degrees of spatial separation - The portal for auditory perception
    When: Friday October 31, 2014
    Where: CCRMA Seminar Room (Top Floor of the Knoll at Stanford)

    Open to the Public
  • Matt Hoffman on a Learned Source-Filter Model of Speech

    Date: 
    Fri, 10/17/2014 - 11:00am - 12:30pm
    Location: 
    CCRMA Seminar Room
    Event Type: 
    Hearing Seminar
    We propose the product-of-filters (PoF) model, a generative model that decomposes audio spectra as sparse linear combinations of "filters" in the log-spectral domain. PoF makes similar assumptions to those used in the classic homomorphic filtering approach to signal processing, but replaces hand-designed decompositions built of basic signal processing operations with a learned decomposition based on statistical inference. When applied to speech, PoF discovers a source-filter representation of speech, despite its lack of any explicit prior knowledge about the mechanisms of vocalization. The PoF model can be used as a prior in more complicated models, permitting applications to problems such as dereverberation and bandwidth expansion.
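    A minimal sketch of the generative idea (my notation and toy numbers, not the authors' code): each log-spectrum is a sparse nonnegative combination of learned log-domain filters, so the linear-domain spectrum is literally a product of filters:

```python
import numpy as np

rng = np.random.default_rng(0)

F = 257   # frequency bins
L = 20    # number of "filters" (log-spectral templates, learned in the real model)

filters = rng.normal(size=(L, F))     # stand-in for learned log-domain filters
a = np.abs(rng.normal(size=L))        # nonnegative activations
a[rng.random(L) < 0.7] = 0.0          # enforce sparsity

log_spectrum = a @ filters            # linear combination in the log domain
spectrum = np.exp(log_spectrum)       # => a product of filters in the linear domain

# the same spectrum, written explicitly as a product over active filters
check = np.ones(F)
for l in range(L):
    check *= np.exp(filters[l]) ** a[l]
assert np.allclose(spectrum, check)
```

    The point of the construction is that additivity in the log-spectral domain (as in homomorphic filtering) corresponds to a multiplicative source-filter decomposition of the spectrum itself.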

    FREE
    Open to the Public
  • Bernard Ross on Binaural beats, brain rhythms, and binaural hearing

    Date: 
    Fri, 10/10/2014 - 3:00pm - 4:30pm
    Location: 
    CCRMA Seminar room
    Event Type: 
    Hearing Seminar
    Two tones with slightly different frequencies, presented one to each ear, interact in the central auditory brain and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound that moves across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at rates varying continuously between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate.
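    For readers who want to hear the effect, a binaural-beat stimulus is easy to synthesize (an illustrative sketch; the carrier and beat rate are my choices, not the stimuli from the study):

```python
import numpy as np

def binaural_beat(carrier_hz=440.0, beat_hz=6.0, dur_s=2.0, sr=44100):
    """One pure tone per ear, differing by beat_hz. The beat is not
    present in either channel alone; it arises from binaural
    interaction in the central auditory system."""
    t = np.arange(int(dur_s * sr)) / sr
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)   # shape (samples, 2)

stereo = binaural_beat()
```

    Played over headphones (never over loudspeakers, where the tones mix acoustically), this produces the slow beating percept the abstract describes.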
    FREE
    Open to the Public
  • Bryan Pardo on Crowdsourcing Audio Production Interfaces

    Date: 
    Mon, 08/25/2014 - 4:00pm - 5:30pm
    Location: 
    Seminar Room
    Event Type: 
    Hearing Seminar

    Potential users of audio production software, such as audio equalizers, may be discouraged by the complexity of the interface and a lack of clear affordances in typical interfaces. We seek to simplify interfaces for tasks such as audio production (e.g. mastering a music album with ProTools), audio tools (e.g. equalizers), and related consumer devices (e.g. hearing aids). Our approach is to use an evaluative paradigm (“I like this sound better than that sound”) and descriptive language (e.g. “Make the violin sound ‘warmer.’”). To build interfaces that use descriptive language, a system must be able to tell whether the stated goal is appropriate for the selected tool.
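    A toy sketch of the evaluative paradigm (my construction, not Bryan Pardo's system): using nothing but "A sounds better than B" answers, a listener's preferred value of a single EQ parameter can be found by ternary search, assuming preference falls off with distance from a hidden ideal:

```python
def tune_by_comparisons(prefers_a, lo=-12.0, hi=12.0, steps=20):
    """Find a listener's preferred value of one EQ parameter (a dB gain)
    from pairwise A-vs-B judgments alone, via ternary search."""
    for _ in range(steps):
        a = lo + (hi - lo) / 3
        b = hi - (hi - lo) / 3
        if prefers_a(a, b):       # "A sounds better than B"
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

# simulated listener whose hidden preferred gain is +4.5 dB
target = 4.5
listener = lambda a, b: abs(a - target) < abs(b - target)
best = tune_by_comparisons(listener)
```

    A real system would play the two settings rather than compare numbers, but the interaction loop is the same: the user never needs to know what the parameter means.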

    FREE
    Open to the Public
  • Bowon Lee on Sound Source Localization by Machines

    Date: 
    Fri, 05/23/2014 - 11:00am - 12:30pm
    Location: 
    CCRMA Seminar Room
    Event Type: 
    Hearing Seminar
    The use of voice commands for human-computer interaction is becoming more prevalent thanks to the recent advancements of automatic speech recognition (ASR) technologies. In typical acoustic environments, audio captured by a microphone contains background noise, reverberation, and signals from interfering sources, making reliable speech capture a challenging problem. Some applications even require more than one user to interact with the system, e.g., gaming, which makes simultaneous speaker detection and localization crucial for enabling natural interactions. Distant multi-speaker speech capture often benefits from the use of microphone arrays that can provide enhanced speech signals using spatial filtering, or beamforming.
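    As a minimal illustration of that last idea (a generic delay-and-sum sketch, not Bowon Lee's system): delay each microphone's signal so that sound from the look direction lines up, then average; the aligned source adds coherently while uncorrelated noise averages down. The geometry below is an assumption:

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Delay-and-sum beamformer: advance each mic by its known arrival
    delay (in samples) so the look-direction source aligns, then average."""
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays):
        out += np.roll(sig, -d)
    return out / len(delays)

# toy scene: one source reaching 3 mics with delays 0, 2, and 4 samples,
# plus independent sensor noise at each mic
rng = np.random.default_rng(0)
src = rng.normal(size=1000)
mics = np.stack([np.roll(src, d) for d in (0, 2, 4)])
mics = mics + 0.5 * rng.normal(size=mics.shape)
enhanced = delay_and_sum(mics, (0, 2, 4))
```

    Steering the beam to a different direction just means using a different set of delays, which is why localization and beamforming go hand in hand.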
    FREE
    Open to the Public
  • Everything you wanted to know about pitch perception....

    Date: 
    Fri, 04/11/2014 - 11:00am - 12:30pm
    Location: 
    CCRMA Seminar room
    Event Type: 
    Hearing Seminar
    I want to review several theories of pitch perception this week at the CCRMA Hearing Seminar. There are models based on spectral profiles (obviously wrong :-), temporal models (too good), and engineering approaches (not perceptual). And there is even newer work based on machine learning. How can these approaches be combined to find something that always works? Something that explains human perception?

    Who: Malcolm Slaney (CCRMA)
    What: Everything you wanted to know about pitch perception
    When: Friday April 11 at 11AM
    Where: CCRMA Seminar Room (Top floor of the Knoll at Stanford)
    Why: What is more fundamental than pitch?

    Bring your ideas, and we’ll see if there is a middle ground.
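    For concreteness, here is a toy "temporal" model of the kind under discussion (my sketch, not one of the models from the talk): estimate pitch from the largest autocorrelation peak within a plausible period range:

```python
import numpy as np

def autocorr_pitch(x, sr, fmin=50.0, fmax=500.0):
    """A minimal temporal pitch model: pick the lag of the largest
    autocorrelation peak within the plausible pitch-period range."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0, 1, 2, ...
    lo = int(sr / fmax)                                 # shortest period
    hi = int(sr / fmin)                                 # longest period
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

# toy harmonic tone with a 200 Hz fundamental
sr = 8000
t = np.arange(4000) / sr
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
f0 = autocorr_pitch(x, sr)
```

    This "too good" behavior is part of the puzzle: the autocorrelation finds the fundamental even when it is weak or missing, which real listeners do too, but the model offers no account of the cases where human pitch perception breaks down.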

    FREE
    Open to the Public