Center for Computer Research in Music and Acoustics
CCRMA Summer Workshops
Summer 2024 Workshops: CCRMA Summer Workshops Announced! There is a wide variety of offerings, some in person, some online, and some hybrid. Have a look! More will be announced as they're organized, so check back with us frequently!
[Check out the schedule] [Register for workshops]
Financial assistance will be available for some workshops - check specific workshop pages for more details.
CCRMA Open House 2024
Upcoming Events
Guillermo Galindo: Nexo Organico/Organic Nexus
NeuralNote: An Audio-to-MIDI Plugin Using Machine Learning
Abstract: NeuralNote is an open-source audio-to-MIDI VST/AU plugin that uses machine learning for accurate transcription. This talk will begin with an in-depth look at BasicPitch, the machine learning model from Spotify that powers NeuralNote. We will explore its internal workings and how it processes audio to generate MIDI data. Next, we will cover the integration of BasicPitch into the NeuralNote plugin, implemented in C++ using the JUCE framework. We will discuss the challenges of incorporating neural network inference in audio plugins, focusing on real-time processing, thread safety, and performance. A comparison of the ONNXRuntime and RTNeural libraries will highlight the options for neural network integration in this domain.
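To give a flavor of the transcription step the abstract describes: models like BasicPitch emit a frame-level matrix of per-pitch activation probabilities, which post-processing then turns into discrete note events. The sketch below is a minimal, hypothetical illustration of that thresholding step in plain Python — the function name, threshold, and frame size are illustrative assumptions, not NeuralNote's actual code.

```python
# Hypothetical sketch: turning a frame-level pitch activation matrix
# (as produced by models like BasicPitch) into MIDI-style note events.
# All names and thresholds here are illustrative, not NeuralNote's code.

def activations_to_notes(activations, threshold=0.5, frame_ms=10):
    """activations: list of frames, each a list of per-pitch probabilities.

    Returns (pitch, onset_ms, offset_ms) tuples for each contiguous run
    of frames whose probability exceeds the threshold."""
    notes = []
    n_pitches = len(activations[0]) if activations else 0
    for pitch in range(n_pitches):
        onset = None
        for t, frame in enumerate(activations):
            active = frame[pitch] >= threshold
            if active and onset is None:
                onset = t                      # note starts on this frame
            elif not active and onset is not None:
                notes.append((pitch, onset * frame_ms, t * frame_ms))
                onset = None                   # note ended
        if onset is not None:                  # note still sounding at end
            notes.append((pitch, onset * frame_ms, len(activations) * frame_ms))
    return notes

# Toy example: pitch 1 is active for frames 1-3 (10 ms per frame).
acts = [
    [0.1, 0.2],
    [0.1, 0.9],
    [0.2, 0.8],
    [0.1, 0.7],
    [0.0, 0.1],
]
print(activations_to_notes(acts))  # [(1, 10, 40)]
```

In the real plugin this post-processing must run off the audio thread; the abstract's points about real-time safety concern exactly that separation.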
Iran Sanadzadeh: Frames of Reference
FREE and Open to the Public | In Person + Livestream
Robert L. White's Cochlear Implants
Join us for a special Stanford Hearing Seminar on the invention of the cochlear implant speech processor. May 31 at 10:30 AM in Stanford BMI 1021.
AI-based Digital Synthesizer Preset Programming: Parameter Estimation for Sound Matching
Presenter: Soohyun Kim
Recent Events
Lloyd May on Audio Processing Strategies to Enhance Cochlear Implant Users' Music Enjoyment
Who: Lloyd May (CCRMA)
What: Designing Audio Processing Strategies to Enhance Cochlear Implant Users' Music Enjoyment
When: Fri, 05/10/2024 - 10:30am - 12:00pm
Where: CCRMA Seminar Room, Top Floor of The Knoll at Stanford
Why: How do we stimulate our brains with electricity?
Pulse Audition: Leveraging DNN-Based Speech Enhancement for Improved Communication in Assistive Listening Devices
This talk explores the evolution of Deep Neural Network (DNN) based approaches to speech enhancement and source separation. Beginning with a historical overview, it traces the progression from traditional methods to current state-of-the-art techniques. Emphasizing the persistent challenge of speech intelligibility in noisy environments, the discussion turns to the inadequate performance of conventional hearing aids. Pulse Audition's approach of integrating DNN-based technologies into hearing assistance systems is then highlighted, presenting a promising avenue for significantly improving communication for individuals with hearing impairments.
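A common thread in the DNN-based approaches the abstract refers to is mask-based enhancement: a network predicts a time-frequency mask that is multiplied with the noisy spectrogram to suppress noise-dominated bins. The sketch below illustrates only that core idea, with an "oracle" Wiener-style mask computed from known speech and noise powers standing in for a network's prediction — a simplifying assumption, not Pulse Audition's actual method.

```python
# Hypothetical sketch of mask-based speech enhancement, the core idea
# behind many DNN approaches: predict a per-bin gain in [0, 1] and apply
# it to the noisy spectrum. An oracle Wiener-style mask stands in here
# for the DNN's output; real systems estimate it from the noisy signal.

def wiener_mask(speech_power, noise_power):
    """Per-bin gain: speech power / (speech + noise) power."""
    return [s / (s + n) if (s + n) > 0 else 0.0
            for s, n in zip(speech_power, noise_power)]

def apply_mask(noisy_magnitude, mask):
    """Pass speech-dominated bins, attenuate noise-dominated ones."""
    return [m * g for m, g in zip(noisy_magnitude, mask)]

# Toy spectral frame: bins 0-1 are speech-dominated, bins 2-3 noisy.
speech = [9.0, 4.0, 0.0, 1.0]
noise  = [1.0, 1.0, 4.0, 9.0]
noisy  = [10.0, 5.0, 4.0, 10.0]  # simplified additive magnitudes

mask = wiener_mask(speech, noise)
print(apply_mask(noisy, mask))
```

The same multiply-by-mask structure applies per STFT frame in a streaming system, which is what makes it attractive for the low-latency constraints of hearing devices.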
Galan Trio: Kinesis
FREE and Open to the Public | In Person + Livestream
Stanford Graduate Composers Present: Marco Fusi
FREE and Open to the Public | In Person
Past Live Streamed Events
Recent News
Jonathan Berger's "My Lai" In the News
"In My Lai, a monodrama for tenor, string quartet, and Vietnamese instruments, composer Jonathan Berger had countless tragic elements at his disposal... In this immersive performance, we had the sense that, rather than defaulting to the story's obvious tragic details, Berger illuminates a single, more subtle element - the outraged bewilderment we often feel in the face of unimaginable horror."
Issue 21 of the Csound Journal Released
http://csoundjournal.com/issue21/index.html
This issue of the Csound Journal features an article written by MST student Paul Batchelor, which can be found here:
http://csoundjournal.com/issue21/chuck_sound.html
John Chowning Interview on RWM
Sonifying the world: How life's data becomes music
"Unlike sex or hunger, music doesn’t seem absolutely necessary to everyday survival – yet our musical self was forged deep in human history, in the crucible of evolution by the adaptive pressure of the natural world. That’s an insight that has inspired Chris Chafe, Director of Stanford University’s Center for Computer Research in Music and Acoustics (or CCRMA, stylishly pronounced karma)."