Recent Events
https://ccrma.stanford.edu/recent-events/feed

Schallfeld
https://ccrma.stanford.edu/events/schallfeld
Date: Thu, 09/28/2023, 7:30pm - 9:00pm
Location: Dinkelspiel Auditorium
Event Type: Concert

Schallfeld performs new works by Stanford graduate composers Tatiana Catanzaro, Kimia Koochakzadeh-Yazdi, Mike Mulshine, Seán Ó Dálaigh, and Julie Zhu.

FREE and Open to the Public

Audiovisual Performance | Final Projects | Arts Intensive 2023
https://ccrma.stanford.edu/events/audiovisual-performance-2023
Date: Wed, 09/20/2023, 7:30pm - 9:00pm
Location: CCRMA Stage / CCRMA LIVE (https://ccrma.stanford.edu/live)
Event Type: Concert

The students in the Audiovisual Performance class have worked (hard!) on several projects exploring relationships between sound and moving image, programming and physical interaction with audio and video material, remixing audiovisual compositions, and performing with their digital doppelgängers.
We are very excited to present their final projects in this live audiovisual show, which will explore various concepts and aesthetics ranging from memory and nostalgia to the political and poetic uses of technology.

FREE and Open to the Public | Livestream: https://ccrma.stanford.edu/live

UnStumm: Conversation of Moving Image and Sound | Arts Intensive
https://ccrma.stanford.edu/events/unstumm-conversation-of-moving-image-and-sound
Date: Sat, 09/16/2023, 7:30pm - 9:00pm
Location: CCRMA Stage / CCRMA LIVE (https://ccrma.stanford.edu/live)
Event Type: Concert

The Stanford Arts Intensive program and CCRMA present UnStumm as part of the Audiovisual Performance class.

UnStumm – conversation of moving image and sound (https://unstumm.com/about/) is a real-time film and music (Echtzeitfilm) project for cross-disciplinary and cross-cultural collaboration between video artists and musicians from Germany and other countries. It aims to create an environment of cultural and creative exchange, where a common, complex artistic language is invented and used to communicate narratives and textures, colliding, combining, and attracting the worlds of sight and sound. Since 2016, UnStumm has performed in 12 countries worldwide, and collaborations have taken place with more than 65 live video artists, musicians, and dancers. In their performance, UnStumm will combine an in-situ performance with their Augmented Voyage app (https://unstumm.com/augmented-voyage/), making it a mixed-reality performance: the audience will experience the performance in the space while using the app to follow UnStumm's movements between different layers of projection and reality.
[CANCELLED!] Tempo vs. Pitch: Understanding Self-Supervised Tempo Estimation
https://ccrma.stanford.edu/events/cancelled-tempo-vs-pitch-understanding-self-supervised-tempo-estimation
Date: Fri, 08/25/2023, 11:00am - 12:00pm
Location: Classroom
Event Type: Guest Lecture

Giovana Morais (NYU) joins us to talk about her recent ICASSP paper.

ABSTRACT: Self-supervision methods learn representations by solving pretext tasks that do not require human-generated labels, alleviating the need for time-consuming annotations. These methods have been applied in computer vision, natural language processing, environmental sound analysis, and recently in music information retrieval, e.g. for pitch estimation. Particularly in the context of music, there are few insights about the fragility of these models regarding different distributions of data, and how they could be mitigated. In this paper, we explore these questions by dissecting a self-supervised model for pitch estimation adapted for tempo estimation via rigorous experimentation with synthetic data.

FREE | Open to the Public
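The core idea in the abstract, learning tempo without human annotations by adapting a self-supervised pitch model, can be illustrated with a toy pretext task. The sketch below is a rough assumption of how such a task could look, not the method in the paper: it synthesizes pairs of click tracks that differ only by a known tempo ratio, so a model trained to regress that ratio never needs a human tempo label.

```python
# Hypothetical sketch of a label-free (self-supervised) pretext task for tempo:
# pairs of synthetic click tracks that differ only by a known time-stretch
# factor. Illustrative only; not the model from the ICASSP paper.
import numpy as np

SR = 22050  # sample rate in Hz

def click_track(bpm: float, seconds: float = 4.0) -> np.ndarray:
    """Synthesize a simple click track at the given tempo."""
    n = int(SR * seconds)
    y = np.zeros(n)
    period = int(round(SR * 60.0 / bpm))   # samples between beats
    click = np.hanning(256) * np.sin(2 * np.pi * 1000 * np.arange(256) / SR)
    for start in range(0, n - 256, period):
        y[start:start + 256] += click
    return y

def make_pretext_pair(rng: np.random.Generator):
    """Return two clips and the self-generated relative tempo label."""
    base_bpm = rng.uniform(60, 180)
    ratio = 2.0 ** rng.uniform(-0.5, 0.5)   # relative tempo change
    a = click_track(base_bpm)
    b = click_track(base_bpm * ratio)       # same content, different tempo
    return a, b, np.log2(ratio)             # a model would regress this label

rng = np.random.default_rng(0)
a, b, label = make_pretext_pair(rng)
print(a.shape, b.shape, f"log2 tempo ratio = {label:.3f}")
```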
Sound localization using a deep graph signal-processing model for acoustic imaging
https://ccrma.stanford.edu/events/sound-localization-using-deep-graph-signal-processing-model-acoustic-imaging
Date: Wed, 08/23/2023, 3:30pm - 4:30pm
Event Type: Guest Lecture

Adrian S. Roman (USC) joins us to discuss his ongoing project.

ABSTRACT: Our research explores ways to leverage the architecture of DeepWave, originally used as an acoustic camera, to enable precise localization of sound sources. While DeepWave inherently generates spherical maps in the form of sound intensity fields, it has not been utilized for determining precise localization coordinates of sound sources.

FREE | Open to the Public
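To make the gap described in the abstract concrete: an acoustic camera assigns an intensity to each direction, and localization means collapsing that map to explicit coordinates. The sketch below does this with simple peak picking and an intensity-weighted centroid over a made-up azimuth/elevation grid; the grid layout and the Gaussian blob are illustrative assumptions, not DeepWave's actual output format or the speaker's approach.

```python
# Going from a spherical intensity map (what an acoustic camera outputs)
# to explicit source coordinates. Grid and blob are fabricated for the demo.
import numpy as np

# Fake intensity map sampled on an azimuth x elevation grid (degrees).
az = np.linspace(-180, 180, 361)
el = np.linspace(-90, 90, 181)
AZ, EL = np.meshgrid(az, el, indexing="ij")

# Pretend the network lit up a blob around azimuth 40 deg, elevation 10 deg.
intensity = np.exp(-(((AZ - 40) / 15) ** 2 + ((EL - 10) / 15) ** 2))

# Simplest localization: take the direction of maximum intensity...
i, j = np.unravel_index(np.argmax(intensity), intensity.shape)
print("peak estimate:", az[i], el[j])

# ...or an intensity-weighted centroid, which is sub-grid accurate
# when the blob is roughly symmetric.
w = intensity / intensity.sum()
print("centroid estimate:", (w * AZ).sum(), (w * EL).sum())
```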
Exploring Approaches to Multi-Task Automatic Synthesizer Programming
https://ccrma.stanford.edu/events/exploring-approaches-multi-task-automatic-synthesizer-programming
Date: Mon, 08/21/2023, 3:30pm - 4:30pm
Location: Classroom
Event Type: Guest Lecture

Daniel Faronbi (NYU) joins us to talk about his recent ICASSP paper.

ABSTRACT: Automatic synthesizer programming is the task of transforming an audio signal that was generated from a virtual instrument into the parameters of a sound synthesizer that would generate this signal. In the past, this could only be done for one virtual instrument. In this paper, we expand the current literature by exploring approaches to automatic synthesizer programming for multiple virtual instruments. Two different approaches to multi-task automatic synthesizer programming are presented. We find that the joint-decoder approach performs best. We also evaluate the performance of this model ...

FREE | Open to the Public
Retrieving musical information from neural data: how cognitive features enrich acoustic ones
https://ccrma.stanford.edu/events/retrieving-musical-information-neural-data-how-cognitive-features-enrich-acoustic-ones
Date: Fri, 08/18/2023, 3:30pm - 4:30pm
Location: Classroom
Event Type: Guest Lecture

Ellie Abrams (NYU) joins us to talk about her recent ISMIR paper.

ABSTRACT: Various features, from low-level acoustics, to higher-level statistical regularities, to memory associations, contribute to the experience of musical enjoyment and pleasure. Recent work suggests that musical surprisal, that is, the unexpectedness of a musical event given its context, may directly predict listeners' experiences of pleasure and enjoyment during music listening. Understanding how surprisal shapes listeners' preferences for certain musical pieces has implications for music recommender systems, which are typically content- (both acoustic and semantic) or metadata- ...

FREE | Open to the Public
Insights into Soundscape Synthesis and Energy Consumption of Sound Event Detection Systems
https://ccrma.stanford.edu/events/insights-soundscape-synthesis-and-energy-consumption-of-sound-event-detection-systems
Date: Thu, 08/17/2023, 10:00am - 11:00am
Location: Classroom
Event Type: Guest Lecture

Francesca Ronchini (Politecnico di Milano) joins us to discuss her PhD research.

FREE | Open to the Public
The Sound of AI Accelerator
https://ccrma.stanford.edu/events/sound-of-ai-accelerator
Date: Wed, 08/16/2023, 11:00am - 12:00pm
Location: Classroom
Event Type: Guest Lecture

"The Sound of AI Accelerator: From Idea to Music AI Startup"

Are you interested in starting a music AI company? In this talk, Valerio will introduce The Sound of AI Accelerator (https://thesoundofai.com/accelerator.html), the first startup accelerator focused on music, audio, and voice AI.

FREE | Open to the Public
Deep learning for symbolic music representations
https://ccrma.stanford.edu/events/deep-learning-symbolic-music-representations
Date: Tue, 08/15/2023, 3:30pm - 4:30pm
Location: Classroom
Event Type: Guest Lecture

Néstor Nápoles López (McGill) joins us to discuss his PhD research.

ABSTRACT: The talk will discuss the specific challenges of symbolic music representations for deep learning, with a particular emphasis on harmony and tonal analysis (although the methods discussed are applicable to other domains too). Valuable resources will be provided, including access to symbolic music datasets, essential software libraries, effective workflows, and practical insights for symbolic music data manipulation. The talk will also briefly discuss popular papers on the topic, as well as Néstor's research.

FREE | Open to the Public
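As a taste of the kind of data manipulation such a talk covers, the sketch below flattens a score into (onset, MIDI pitch, duration) tuples, one common symbolic representation that can then be tokenized for a sequence model. It uses music21's bundled Bach chorale corpus; the representation choice is a generic example, not necessarily the workflows presented in the talk.

```python
# Flatten a symbolic score into (onset, MIDI pitch, duration) events with
# music21. Generic illustration of a symbolic music representation.
from music21 import corpus

score = corpus.parse('bach/bwv66.6')      # a Bach chorale shipped with music21

events = []
for n in score.flatten().notes:           # notes and chords from all parts
    for p in n.pitches:                   # a Note has one pitch, a Chord several
        events.append((float(n.offset), p.midi, float(n.quarterLength)))

events.sort()                             # order by onset time
print(len(events), "events; first five:", events[:5])
```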