Center for Computer Research in Music and Acoustics
Upcoming Events
Nat Condit-Schultz on Tempo, Tactus, Rhythm, Flow: Computational Hip Hop Musicology in Theory and Practice
Concepts and Control: Understanding Creativity in Deep Music Generation
Abstract: Recently, generative AI has achieved impressive results in music generation. Yet, the challenge remains: how can these models be meaningfully applied in real-world music creation, for both professional and amateur musicians? We argue that what’s missing is an interpretable generative architecture—one that captures music concepts and their relations, which can be so finely nuanced that they defy straightforward description. In this talk, I will explore various approaches to creating such an architecture, demonstrating how it enhances control and interaction in music generation.
Juhan Nam, "My Journey Toward Musically Intelligent Machines"
Creating intelligent machines that can listen to, play, and even make music has been a longstanding human ambition. Recent advancements in AI, especially through deep learning, have brought us closer to realizing this vision. In this talk, I will share my personal journey in developing musically intelligent machines, beginning with my PhD research on music representation learning during the early days of deep learning, and continuing with my collaborative work with students over the past decade at KAIST. Key topics will include bridging music audio with language, human-AI music ensemble performances, and neural audio processing.
Tristan Peng's Piano Recital
Recital Program
- Prokofiev Piano Sonata No. 3, Op. 28
- Bach Toccata in E Minor, BWV 914
- Chopin Nocturnes Op. 48
- Ravel Miroirs
Foreign/Domestic
FREE and Open to the Public | In Person + Livestream
Recent Events
Lobe Concert: Goodbye Sam & Nolan!
Lobe is Ethan Buck, Sam Silverstein, Nolan Miranda, Daiki Nakajima, Michael Hayes, and Mark Rau in spirit
Tech: Sami Wurm
FREE and Open to the Public | In Person + Livestream
Demo of Personalized 3D Sound System
Leslie Famularo on Differentiating and Optimizing an Auditory Model
New software paradigms such as JAX and PyTorch allow one to specify arbitrary computations in a way that can be differentiated. And if we can differentiate a function, we can optimize it. Hurray. How can we express an auditory model in a differentiable fashion?
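To make the premise concrete, here is a minimal, hypothetical JAX sketch (not code from the talk): a toy auditory front end, a resonant two-pole filter followed by rectification and log compression, written as a differentiable function whose parameters are then fit by gradient descent. The model itself, the 16 kHz sample rate, and the step sizes are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

SR = 16000.0  # sample rate assumed for this toy example

def auditory_model(params, signal):
    # Toy stand-in for an auditory model: a two-pole resonant filter
    # (cochlear-like bandpass) followed by half-wave rectification and
    # log compression (hair-cell-like nonlinearity).
    freq, damping = params
    theta = 2.0 * jnp.pi * freq / SR
    r = jnp.exp(-damping)
    a1 = -2.0 * r * jnp.cos(theta)
    a2 = r * r

    def step(carry, x):
        y1, y2 = carry
        y = x - a1 * y1 - a2 * y2   # recursive (IIR) filter update
        return (y, y1), y

    init = (jnp.zeros(()), jnp.zeros(()))
    _, y = jax.lax.scan(step, init, signal)
    return jnp.log1p(jax.nn.relu(y))

def loss(params, signal, target):
    return jnp.mean((auditory_model(params, signal) - target) ** 2)

grad_fn = jax.jit(jax.grad(loss))

# Recover the filter parameters from a target response produced
# with known "true" parameters.
key = jax.random.PRNGKey(0)
signal = jax.random.normal(key, (2048,))
true_params = jnp.array([440.0, 0.05])
target = auditory_model(true_params, signal)

params = jnp.array([400.0, 0.10])  # initial guess
lr = jnp.array([50.0, 0.01])       # per-parameter step sizes (assumed)
for _ in range(100):
    params = params - lr * grad_fn(params, signal, target)
```

Because the whole pipeline is a pure JAX function, jax.grad delivers exact gradients even through the recursive filter inside lax.scan, which is precisely the property the abstract points at.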
4D Audio-Visual Learning: A Visual Perspective of Sound Propagation and Production
Past Live Streamed Events
Recent News
Jonathan Berger's "My Lai" In the News
"In My Lai, a monodrama for tenor, string quartet, and Vietnamese instruments, composer Jonathan Berger had countless tragic elements at his disposal... In this immersive performance, we had the sense that, rather than defaulting to the story's obvious tragic details, Berger illuminate a single, more subtle element - the outraged bewilderment we often feel in the face of unimaginable horror."
Issue 21 of the Csound Journal Released
http://csoundjournal.com/issue21/index.html
This issue of the Csound Journal features an article written by MST student Paul Batchelor, which can be found here:
http://csoundjournal.com/issue21/chuck_sound.html
John Chowning Interview on RWM
Sonifying the world: How life's data becomes music
"Unlike sex or hunger, music doesn’t seem absolutely necessary to everyday survival – yet our musical self was forged deep in human history, in the crucible of evolution by the adaptive pressure of the natural world. That’s an insight that has inspired Chris Chafe, Director of Stanford University’s Center for Computer Research in Music and Acoustics (or CCRMA, stylishly pronounced karma).