Center for Computer Research in Music and Acoustics
Blair Kaneshiro and Jacek P. Dmochowski will talk about the connection between neuroimaging methods (e.g., EEG, MEG, and fMRI) and music information retrieval.
Automatically Learning the Structure of Spoken Language Without Supervision
Aren Jansen, Google Machine Hearing Group
Massive-multichannel sound presentation approaches like Wave Field Synthesis and Higher-Order Ambisonics offer significantly more degrees of freedom for designing the synthesized sound field than conventional approaches like stereophony or surround sound. While much has been achieved in presenting the direct sound of virtual sound sources, the presentation of reverberation has long been neglected. This talk gives an overview of current work on all components of reverberation, i.e., early reflections, late reverberation, and room modes. The talk targets a general audience.
Just ten days after its world premiere, CCRMA hosts a performance of Shifting/Drifting for solo violin and real-time computer processing by Pulitzer Prize–winning composer Roger Reynolds, featuring acclaimed English violinist Irvine Arditti, founder of the celebrated Arditti Quartet, with computer musician Paul Hembree. The performance will be preceded by an introduction to their collaborative process and a workshop in which they will demonstrate the musical sources and algorithmic strategies within the piece.
Signal Processing, Plasticity and Pattern Formation in Networks of Neural Oscillators