Center for Computer Research in Music and Acoustics
CCRMA Seeks Facilities Specialist
Happy late summer to all! The staff at CCRMA are *elated* to announce that we are searching for a new person to join our team. Please feel free to ask questions of any of us about the position.
Detailed job posting and application can be found here: https://careersearch.stanford.edu/jobs/facilities-specialist-1-on-site-2...
COVID Policies
See CCRMA's COVID policies for 2023.
Upcoming Events
CCRMA Transitions 2023

FREE and Open to the Public | In Person + Livestream
In-person access to these events is based on registration. Reserve your seat here. Please arrive no later than 10 minutes before the show; otherwise, your seat may be given away.
Laura Gwilliams on the Computational Architecture of Speech Comprehension

I'm really happy to welcome Prof. Laura Gwilliams to Stanford and the Hearing Seminar.
Josh McDermott (MIT) on Auditory Brain Models

Details to follow.
New Music Exchange with Japan: A Creative Residency and Concerts by the Ensemble Kujoyama

FREE and Open to the Public | In Person + Livestream
Carole Kim: Cascade | Dilate Ensemble and Oguri

Recent Events
Schallfeld

Audiovisual Performance | Final Projects | Arts Intensive 2023

FREE and Open to the Public | Livestream
UnStumm: Conversation of Moving Image and Sound | Arts Intensive

UnStumm – a conversation of moving image and sound – is a project for real-time film and music (Echtzeitfilm) built on cross-disciplinary and cross-cultural collaboration between video artists and musicians from Germany and other countries. It aims to create an environment of cultural and creative exchange in which a common, complex artistic language is invented and used to communicate narratives and textures, colliding, combining, and attracting worlds of sight and sound. Since 2016, UnStumm has performed in 12 countries worldwide, collaborating with more than 65 live video artists, musicians, and dancers. In this performance, UnStumm will combine an in-situ performance with its Augmented Voyage app, making it a mixed-reality performance: the audience will experience the performance in the space while using the app to follow UnStumm's movements between different layers of projection and reality.
[CANCELLED!] TEMPO VS. PITCH: UNDERSTANDING SELF-SUPERVISED TEMPO ESTIMATION

Giovana Morais (NYU) joins us to talk about her recent ICASSP paper. ABSTRACT: Self-supervision methods learn representations by solving pretext tasks that do not require human-generated labels, alleviating the need for time-consuming annotations. These methods have been applied in computer vision, natural language processing, environmental sound analysis, and recently in music information retrieval, e.g. for pitch estimation. Particularly in the context of music, there are few insights about the fragility of these models regarding different distributions of data, and how they could be mitigated. In this paper, we explore these questions by dissecting a self-supervised model for pitch estimation adapted for tempo estimation via rigorous experimentation with synthetic data.
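For readers unfamiliar with the pretext-task idea mentioned in the abstract, the sketch below is a minimal, hypothetical illustration (not the paper's actual model, data, or architecture): two time-stretched views of the same clip are generated with known stretch factors, and a small network is trained to predict the relative tempo change between them, so no human tempo annotations are needed.

```python
# Hypothetical, simplified sketch of a self-supervised "relative tempo" pretext task.
# Not the method from the ICASSP paper; the click-track data, network, and loss are
# assumptions made purely for illustration of label-free training.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

SR = 8000          # sample rate (assumed for this sketch)
CLIP = 2 * SR      # fixed-length training clips (2 seconds)

def random_click_track(batch):
    """Synthesize simple click tracks with random tempi (stand-in for real audio)."""
    t = torch.arange(CLIP) / SR
    bpm = torch.rand(batch, 1) * 100 + 60            # tempi between 60 and 160 BPM
    phase = (t * bpm / 60.0) % 1.0                   # beat phase in [0, 1)
    return (phase < 0.02).float().unsqueeze(1)       # (batch, 1, CLIP) pulse train

def time_stretch(x, factor):
    """Stretch by resampling, then crop or pad back to the fixed clip length."""
    y = F.interpolate(x, scale_factor=factor, mode="linear", align_corners=False)
    if y.shape[-1] >= CLIP:
        return y[..., :CLIP]
    return F.pad(y, (0, CLIP - y.shape[-1]))

class TempoHead(nn.Module):
    """Tiny 1-D CNN mapping a clip to a single scalar (an unscaled 'tempo' score)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 64, stride=16), nn.ReLU(),
            nn.Conv1d(16, 32, 16, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = TempoHead()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    x = random_click_track(batch=16)
    # Two views of the same clips with known, random stretch factors.
    fa, fb = (0.7 + 0.6 * torch.rand(2)).tolist()
    za, zb = model(time_stretch(x, fa)), model(time_stretch(x, fb))
    # Pretext target: stretching by f scales tempo by 1/f, so the relative
    # log-tempo difference between the two views is log(fb / fa).
    target = torch.full_like(za, math.log(fb / fa))
    loss = F.mse_loss(za - zb, target)
    opt.zero_grad(); loss.backward(); opt.step()
```

The supervision signal comes entirely from the synthetic stretch factors, which is the essence of a pretext task: the label is a by-product of the data transformation rather than a human annotation.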
Recent News
Poppy Crum Joins Advisory Board for Engineering & Technology Magazine's Innovation Awards
JackTrip: Syncing performances online, Stanford News

"Stanford-developed software enables musicians isolated by the coronavirus pandemic to jam together again in real-time ... A longstanding software program for online music playing has been optimized for slower, home-based internet connections."
https://news.stanford.edu/2020/09/18/jacktrip-software-allows-musicians-sync-performances-online/
By Adam Hadhazy
The Curious Composer: Jonathan Berger