Center for Computer Research in Music and Acoustics
Upcoming Events
Junhyeok Lee (from JHU), "Differentiable Phase Augmentation for Speech Synthesis"
Abstract
One of the most important goals of generative models is achieving a one-to-many mapping. In audio signals, the phase is a key component that contributes to one-to-many mapping, particularly in vocoder tasks. However, current audio generative models often overlook this aspect. In this seminar, we will discuss the application of domain knowledge from digital signal processing to deep generative modeling, including concepts such as the Fourier transform, Shannon sampling, the Nyquist frequency, quantization, and others.
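The exact formulation from the talk isn't reproduced here, but the core idea behind phase augmentation can be sketched in a few lines of NumPy: rotate the phase of each STFT bin by a random offset and resynthesize, so the magnitude spectrogram stays identical while the waveform changes. The helper names (`stft`, `istft`, `phase_augment`) and all parameter values below are illustrative assumptions, not the speaker's implementation; a truly differentiable version would express the same operations in an autodiff framework.

```python
# Hypothetical sketch of phase augmentation (not the speaker's exact method):
# rotating every frequency bin's phase preserves |STFT| but changes the waveform,
# illustrating the one-to-many mapping contributed by phase.
import numpy as np

def stft(x, n_fft=512, hop=128):
    """Hann-windowed STFT, returned as an array of shape (frames, bins)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=-1)

def istft(X, n_fft=512, hop=128):
    """Overlap-add resynthesis (window-sum normalization omitted for brevity)."""
    win = np.hanning(n_fft)
    out = np.zeros(hop * (X.shape[0] - 1) + n_fft)
    for i, frame in enumerate(np.fft.irfft(X, n=n_fft, axis=-1)):
        out[i * hop:i * hop + n_fft] += frame * win
    return out

def phase_augment(x, n_fft=512, hop=128, seed=0):
    """Rotate each frequency bin's phase by a random offset (shared across
    frames), then resynthesize. The magnitude spectrogram is unchanged."""
    X = stft(x, n_fft, hop)
    phi = np.random.default_rng(seed).uniform(-np.pi, np.pi, size=X.shape[-1])
    X_aug = X * np.exp(1j * phi)
    assert np.allclose(np.abs(X), np.abs(X_aug))  # magnitudes preserved by construction
    return istft(X_aug, n_fft, hop)

# Example: two augmented versions of one signal share magnitudes but differ in time.
t = np.arange(16000) / 16000.0
x = np.sin(2 * np.pi * 220 * t)
y1, y2 = phase_augment(x, seed=1), phase_augment(x, seed=2)
print(y1.shape, np.max(np.abs(y1 - y2)))  # same length, different waveforms
```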
Chris Brown & Thea Farhadian | Solos and Duo Performance
FREE and Open to the Public | In Person + Livestream
Jill Kries - How the brain encodes speech and language with aging and aphasia
Recent Events
Koubeh
Tristan Peng's Piano Recital
Recital Program
- Prokofiev Piano Sonata No. 3, Op. 28
- Bach Toccata in E Minor, BWV 914
- Chopin Nocturnes Op. 48
- Ravel Miroirs
Juhan Nam, "My Journey Toward Musically Intelligent Machines"
Creating intelligent machines that can listen to, play, and even make music has been a longstanding human ambition. Recent advancements in AI, especially through deep learning, have brought us closer to realizing this vision. In this talk, I will share my personal journey in developing musically intelligent machines, beginning with my PhD research on music representation learning during the early days of deep learning, and continuing with my collaborative work with students over the past decade at KAIST. Key topics will include bridging music audio with language, human-AI music ensemble performances, and neural audio processing.
Concepts and Control: Understanding Creativity in Deep Music Generation
Abstract: Recently, generative AI has achieved impressive results in music generation. Yet, the challenge remains: how can these models be meaningfully applied in real-world music creation, for both professional and amateur musicians? We argue that what’s missing is an interpretable generative architecture—one that captures music concepts and their relations, which can be so finely nuanced that they defy straightforward description. In this talk, I will explore various approaches to creating such an architecture, demonstrating how it enhances control and interaction in music generation.
Past Live Streamed Events
Recent News
LISTEN: 1,200 Years of Earth’s Climate, Transformed into Sound
Science podcast featuring work by our fearless leader, Chris Chafe:
"When you sonify data, you experience time in a way you can’t when you look at a chart." Hal Gordon, Graduate student
Oakum - Eoin Callery
Released from behind the mixing console, CCRMA's Concert Coordinator Eoin Callery has been set free to make an old-timey CD for the Bay Area label Eh? Records. Enjoy some amplified violin bow, guitar, and lots of SuperCollider-controlled feedback, all available on a small shiny disc and in newfangled digital Bandcamp form.
Jonathan Berger Première
"Classical musicians face enormous expectations when they play a standard repertory work. Listeners have strong feelings about favorite pieces, even when they are open to fresh interpretive approaches.
The stakes are even higher with a premiere. Performing a new piece becomes an act of advocacy to pull an audience in."
Mystery of 101-year-old master pianist who has dementia
From the article: At first glance, she was elderly and delicate – a woman in her 90s with a declining memory. But then she sat down at the piano to play. “Everybody in the room was totally startled,” says Eleanor Selfridge-Field, who researches music and symbols at Stanford University. “She looked so frail. Once she sat down at the piano, she just wasn’t frail at all. She was full of verve.”