Center for Computer Research in Music and Acoustics
Summer Workshops 2017 Announced!
Ramani Duraiswami on Creating Scientifically Valid Spatial Audio for VR and AR (Special Thursday Seminar)
This free event is open to anyone and is intended for people who wish to get a broad introduction to Faust. By the end of the day, attendees should know how to write simple Faust programs and convert them into various kinds of objects: PD, SuperCollider, Csound, and Max/MSP externals; iOS and Android apps; standalone applications; AU, LV2, VST, and LADSPA plug-ins; etc. We'll also demonstrate how to integrate the code generated by the Faust compiler into existing projects (e.g., JUCE-based plug-ins, smartphone apps, etc.).
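For a sense of what "simple Faust code" looks like, here is a minimal sketch (illustrative only, not taken from the workshop materials): a sine oscillator whose frequency is set by a slider, using the standard library.

```faust
// Minimal Faust sketch: a sine oscillator with a
// slider-controlled frequency, scaled to a safe level.
import("stdfaust.lib");

freq = hslider("freq[unit:Hz]", 440, 50, 2000, 1);
process = os.osc(freq) * 0.5;
```

Saved as, say, `osc.dsp`, a program like this can be turned into the targets listed above with the `faust2...` scripts that ship with the compiler (for example, `faust2puredata osc.dsp` for a PD external).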
Abstract: Songwriting, the art of combining melodies and lyrics, poses new challenges to algorithmic composition. ALYSIA is a machine-learning system that learns the relationship between melodies and lyrics, and uses the resulting model to create new songs in the style of the corpus. While ALYSIA creates melodies for user-provided lyrics, another system, MABLE, creates computer-generated lyrics that convey a coherent story. In addition to discussing both systems, an original song co-created by ALYSIA and music professor Joshua Palkki will be performed.
Joint work with David Loker, Chris Cassion, Rafael Perez y Perez, and Divya Singh.
Adaptive mixing of noisy and robust beamformers for enhancement, visualization and reproduction of sound fields
Abstract: The NESS project (standing for Next Generation Sound Synthesis), funded through a Starting Grant from the European Research Council for five years beginning on January 1, 2012, is an exploratory project concerned entirely with synthetic sound—in particular, numerical simulation techniques for physical-modelling sound synthesis on parallel hardware. It is a joint project between the Acoustics and Audio Group and the Edinburgh Parallel Computing Centre, both at the University of Edinburgh. The models developed in the course of the project span a large set of systems, including brass, cymbals and gongs, percussion, guitar/fretboard interaction, bowed strings, and large 3D room acoustics simulations.
"Unlike sex or hunger, music doesn't seem absolutely necessary to everyday survival – yet our musical self was forged deep in human history, in the crucible of evolution by the adaptive pressure of the natural world. That's an insight that has inspired Chris Chafe, Director of Stanford University's Center for Computer Research in Music and Acoustics (or CCRMA, stylishly pronounced karma)."
Read the full article here!