CCRMA Open House 2024
Thursday and Friday, May 16-17, 2024
Come see what we've been doing up at the Knoll!
Thursday: Join us for lectures, hands-on demonstrations, posters, installations, and musical performances of recent CCRMA research including sound synthesis, online music-making, data-driven research in music cognition, and a musical instrument petting zoo.
Friday: A look at CCRMA's history as we celebrate our 50th anniversary, with historical presentations, alumni "where are they now?" lightning talks, and a John Chowning & Friends concert.
Past CCRMA Open House websites: 2008, 2009, 2010, 2012, 2013, 2016, 2017, 2018, 2019, 2021 CCRMA World Update, 2022.
Facilities on display
(details below)
CCRMA Stage: music and lectures
Hands-on CCRMA history museum
Listening Room: multi-channel surround sound research
Max Lab: maker / physical computing / fabrication / digital musical instrument design
Neuromusic Lab: EEG, motion capture, brain science...
Studios D+E: Sound installations
Virtual Reality Design Lab: Research in virtual, augmented, and mixed reality
Schedule Overview
(details below)
Thursday Schedule
10am - 1pm
LECTURES (also on Zoom), INSTALLATIONS, POSTER/DEMOS, GAMES, OPEN LABS, VR EXPERIENCES, MUSICAL INSTRUMENT PETTING ZOO
1pm - 2pm
LUNCH BREAK
2pm - 4pm
LECTURES (also on Zoom), INSTALLATIONS, POSTER/DEMOS, GAMES, OPEN LABS, VR EXPERIENCES, MUSICAL INSTRUMENT PETTING ZOO
4pm - 6pm | in person and livestream
CONCERT
Friday Schedule
10:10-10:25: TAKAKO FUJIOKA “CCRMA Neuromusic laboratory: research into music and mind”
10:30-10:45: POPPY CRUM “On the Scene: A Rooted History of Auditory Scene Analysis at CCRMA”
10:50-11:05: MALCOLM SLANEY “Forty Years of the Hearing Seminar at CCRMA”
11:10-11:25: HONGCHAN CHOI “Web Music Technology: Past, Present, and Future”
11:30-11:45: GE WANG “A Brief History of Artful Design at CCRMA 2004-Present”
11:50-12:05: NED AUGUSTENBORG “CCRMA Film History Project”
12:10-1:00: BREAK
1-1:15: JONATHAN BERGER “Sound, Space, and the Aesthetics of the Sublime”
1:20-1:35: PAT SCANDALIS et al. “Sondius Physical Modeling Work at CCRMA 1993-1997”
1:45-1:55: JULIUS SMITH “Inventing Modern Sequence Models as a Music 320 Project”
2-2:15: MARINA BOSI “From Ears to Bytes: How Perceptual Audio Coding Transformed the Digital Music Landscape”
2:30-4:30: RODRIGO SEGNINI et al. “CCRMA Career Paths: Lecture, Lightning Talks, and Panel Discussion”
7:30pm
Concert: John Chowning & Friends | in person and livestream
Program / List of Projects on Display
(This list is nearly complete)
In Coherence
CCRMA MAMST '24: Terry Feng, Josh Mitchell, Alex Han, Sami Wurm, Julia Yu, Senyuan Fan, Benny Zhang, Emily Kuo, Yiheng Dong, Soohyun Kim, Max Jardetzky, Victoria Litton, Tristan Peng, Eito Murakami
In Coherence is an audiovisual collage representing the incredibly varied musical interests and talents of CCRMA’s 2024 MA/MST cohort. Each artist completed a minute-long audiovisual piece, given only the last six seconds of a previous artist’s work for inspiration. While each artist’s gesture may appear incoherent at first glance, this sequential juxtaposition places them “in coherent” conversation with each other, a celebration of creative and interdisciplinary audiovisual expression.
Installation. Location: Lobby (116), Day/Time: Thursday and Friday.
Listening Room Demo: Exploring the Past with Virtual Acoustics and Virtual Reality
Jonathan Abel, Elliot K. Canfield-Dafilou, Hassan Estakhrian, Nima Farzaneh, Eito Murakami, Luna Valentin, Matthew Wright, Jonathan Berger
Demonstrations of our recent work recreating the sound of selected heritage sites. We highlight some of our projects at the intersection of music, architectural acoustics, and archaeology. These projects explore architectural acoustics in music composition and aural traditions in historic context. They represent collaborations with archaeologists and architecture historians to explore their hypotheses regarding the influence of sound and soundscape on rituals and the development of art in prehistoric and Archaic archaeological sites, as well as musicological considerations of music for sacred spaces in 17th century Italy.
Demo. Location: Listening Room (128), Day/Time: Thursday all day.
Leçons de ténèbres
Patricia Alessandrini, Riot Ensemble, released on HCR records
"Leçons de ténèbres" is the title track from the portrait CD of works for chamber ensemble and electronics recorded by Riot Ensemble for HCR records, released this year (in October 2023), and described as "supremely well-crafted" by Gramophone Magazine. The electronics in this piece are diffused through transducers placed in/on the instruments of the ensemble, and are derived from various settings of the Lamentations of Jeremiah, including compositions by Palestrina, Orlando di Lasso, Tallis, Zelenka, and Couperin (from whom the title is derived), using only the Hebrew letters which precede each verse. Materials for both the score and the electronics are derived from various performances and various settings of a given letter for each movement of the composition, with the exception of the first movement, which cycles rapidly through the alphabet. The idea behind this process is that the content of the text of each verse is expressed by the setting of the letter that precedes it. What remains are traces of expressivity, without the semantic content of the text.
Musical Performance. Location: Stage (317), Day/Time: Thursday Concert (4-5:30pm) piece 1/5.
CCRMA Film History Project
Ned Augustenborg
Filmmaker Ned Augustenborg will present a film preview highlighting recent interviews conducted by Matt Wright with friends and faculty of CCRMA.
Historical film preview. Location: Classroom (217), Day/Time: Friday 11:50am-12:05pm.
The DX7 (from "DON LEWIS and The Live Electronic Orchestra")
Ned Augustenborg
Film sequence about the Yamaha DX7 synthesizer.
Film excerpt. Location: Lobby (116), Day/Time: Thursday and Friday all day.
Concerto for Conductor and Orchestra
Constantin Basica
This is the documentation of the 2019 premiere during the George Enescu International Festival in Bucharest, Romania. Performed by Cristian Lupeș and the Sibiu Philharmonic Orchestra with Constantin Basica. Shot in the Neuromusic Laboratory at CCRMA with generous support from Prof. Takako Fujioka. Featuring: Constantin Basica, Alex Chechile, Michele Cheng, Hassan Estakhrian, Julie Herndon, Dave Kerr, Hans Kretz, Cristian Lupeș, Barbara Nerness, Michiko Theurer, Nette Worthey, and Matt Wright. Camera assistants: Dave Kerr and Simona Fitcal.
Musical Offering. Location: Stage (317), Day/Time: Thursday Concert (4-5:30pm) piece 5/5.
Sound, Space, and the Aesthetics of the Sublime
Jonathan Berger
Since CCRMA's inception, research and creative work there have included a focus on the simulation of sound in space. This brief talk presents a whirlwind review of some of the key contributions to the field and a glimpse at current and anticipated future work.
Historical lecture. Location: Classroom (217), Day/Time: Friday 1-1:15pm.
Sound, Space and Sensing the Unfathomable
Jonathan Berger, Nima Farzaneh, Eito Murakami, Luna Valentin
We report on our progress on an interdisciplinary grant funded by the Templeton Religion Trust aimed at studying the interactions between architecture and sound with particular emphasis on cultural heritage sites with ritual importance. Our work entails in-situ studies, creation of virtual acoustic models, and rendering auralizations in VR space.
Lecture / work in progress. Location: Stage (317), Day/Time: Thursday 11-11:10am.
YESBOT
Kiran Bhat, Samantha Liu
A mad scientist teaches a robot to perform a song. What could go wrong? This piece was written for the Stanford Laptop Orchestra and performed in the SLOrktastic Chamber Music 2024 concert.
Live Musical Performance. Location: Stage (317), Day/Time: Thursday Concert (4-5:30pm) piece 2/5.
Democratizing Networked Music Performance with an RL-based SD-WAN
Luca Borgianni, Chris Chafe
Networked Music Performance (NMP) has spread through the musician community because of its capability to connect players who are not physically together. However, NMP imposes stringent requirements, such as low latency, which can be challenging to meet when musicians are located in different parts of the world. Despite advances in communication technologies, NMP guarantees a proper-quality experience only to those with high-performance connections. We propose an architecture that aims to democratize NMP, extending its reach to remote areas and individuals lacking access to high-performance networks. In particular, we leverage the novel Software-Defined Wide Area Network (SD-WAN) technology, allowing the integration of a Low-Earth Orbit (LEO) tunnel into the NMP system.
Poster/Demo. Location: Ballroom (216), Day/Time: Thursday.
From Ears to Bytes: How Perceptual Audio Coding Transformed the Digital Music Landscape
Marina Bosi
Have you ever wondered how your MP3 files can pack so much sound into such a small size? Or what sets AAC apart from MP3? The development of perceptual audio coding technologies was a game-changer, enabling the launch of portable music devices and the ubiquity of these technologies in our daily lives - from mobile devices and broadcasting to electronic music distribution. But what made this possible, and where is the technology headed? In this exploration, Dr. Bosi will delve into the significant advancements in audio coding over the years and how they have presented new challenges and opened new opportunities.
Historical lecture. Location: Classroom (217), Day/Time: Friday 2-2:15pm.
Web Music Technology: Past, Present, and Future
Hongchan Choi
Web Music Technology equips developers with the tools to create innovative web-based audio and music applications. This versatile technology enables a wide range of functionalities, including audio processing and synthesis, capture, playback, analysis, and seamless integration with other browser features. The applications of Web Music Technology span diverse fields, with various online learning platforms leveraging its capabilities. Notably, it boasts advantages such as broad accessibility, open standards, thriving developer communities, and tangible real-world impact. Looking ahead, the future of Web Music Technology promises even more potent web APIs, on-device machine learning integration, and transformative advancements in music education.
Historical lecture. Location: Classroom (217), Day/Time: Friday 11:10-11:25am.
On the Scene: A Rooted History of Auditory Scene Analysis at CCRMA
Poppy Crum
In the late 1980s, Albert Bregman came to CCRMA on sabbatical to work with John Chowning, John Pierce, and other faculty and students during the development of his 1990 book, "Auditory Scene Analysis." Through experimental data and Gestalt principles, the book described an elegant theoretical framework for the organizational transformation of incoming acoustic sounds into actionable perceptual elements. That initial work has influenced extensive behavioral, neurophysiological, and computational research over the past 35 years, research that has contributed significantly to our understanding of how we hear music, speech, and other rich acoustic environments. The ASA framework has also influenced signal processing approaches used in numerous products. Many individuals in the CCRMA community have been leaders in this history, including Diana Deutsch and Pierre Divenyi, and CCRMA students such as Nick Bryan, working with Julius Smith and other CCRMA faculty, have led modern approaches to computationally disentangling the acoustic scene. I'll share a short history of Auditory Scene Analysis and its rich connections to CCRMA: past, present, and future.
Historical lecture. Location: Classroom (217), Day/Time: Friday 10:30-10:45am.
Serinette pour les oiseaux de bourbe (2024)
Paul DeMarinis
size: variable
materials: birdcage, glass jars, mud, electronics, radios
Sound Installation. Location: Stairwell B, Day/Time: Friday all day.
CCRMA Studio Tour and Demo Reel of CCRMA Recordings
Hassan Estakhrian, Music 192 Students, CCRMA grad students
Drop-in studio tours and music listening.
Installation. Location: Recording Studio Control Room (127), Day/Time: Thursday all day.
Open Jam Session in CAVIAR
Hassan Estakhrian, Jonathan Abel, Jonathan Berger, Eoin Callery, Elliot Kermit Canfield-Dafilou, Nima Farzaneh, Eito Murakami, Travis Skare, Luna Valentin, Matthew Wright
Jam and make noise in the live room with CCRMA's virtual acoustics system: CAVIAR ("Chamber for Augmented/Virtual/Interactive Audio Realities"). Teleport to historical and synthesized environments and musically interact with the different acoustics. Bring your instrument, use your voice, or play one of the instruments that will be set up. Those who just want to listen are also welcome.
Installation. Location: Recording Studio Live Room (124), Day/Time: Thursday all day.
Exploring Neural Audio Coding Methods
Senyuan Fan, Marina Bosi
Audio coding can be implemented through concise neural codes employing end-to-end neural networks. While this method has shown promise in achieving high compression ratios, the reconstructed audio quality frequently suffers. Implicit neural representations (INRs) have demonstrated remarkable efficiency in modeling various complex signals, spanning from radiance fields and 3D shapes to images, videos, and audio. By employing periodic activation functions, fully connected multilayer perceptrons (MLPs) can proficiently model audio signals. In contrast to end-to-end neural audio codecs, INRs do not necessitate extensive training data and show notably faster decoding speeds. In this presentation we explore INR-based audio coding with heightened perceptual accuracy (a minimal illustrative sketch follows this entry).
Lecture. Location: Stage (317), Day/Time: Thursday 2:15-2:25pm.
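As background for the implicit-neural-representation approach described above, here is a minimal, illustrative PyTorch sketch (not the presenters' code): a small MLP with periodic (sine) activations is fit to a short audio signal by mapping normalized time values to sample amplitudes. The layer sizes, frequency scaling factor omega0, and the synthetic target signal are arbitrary stand-ins for illustration.

# Illustrative sketch only: fit a sine-activated MLP (an implicit neural
# representation) to a short audio signal, mapping time t -> amplitude x(t).
import math
import torch

class SineMLP(torch.nn.Module):
    def __init__(self, hidden=256, layers=3, omega0=30.0):
        super().__init__()
        self.omega0 = omega0
        dims = [1] + [hidden] * layers
        self.hidden = torch.nn.ModuleList(
            [torch.nn.Linear(dims[i], dims[i + 1]) for i in range(layers)])
        self.out = torch.nn.Linear(hidden, 1)

    def forward(self, t):
        h = t
        for lin in self.hidden:
            h = torch.sin(self.omega0 * lin(h))   # periodic activation
        return self.out(h)

t = torch.linspace(-1.0, 1.0, 4000).unsqueeze(1)      # normalized time axis
target = (0.5 * torch.sin(2 * math.pi * 40 * t)
          + 0.3 * torch.sin(2 * math.pi * 90 * t))    # synthetic stand-in for a frame of real audio

model = SineMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(1000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(t), target)
    loss.backward()
    opt.step()

# The trained weights are, in effect, the "code" for this one signal;
# decoding is a single forward pass over the desired time grid.
decoded = model(t).detach()

Unlike an end-to-end neural codec, nothing here is trained on a corpus: the network itself is overfit to, and stands in for, the individual signal.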
Sonic Pinwheel
Terry Feng
Sonic Pinwheel is a shared musical installation of web-based pinwheel instruments running on mobile/desktop devices. Bring your phone as a musical instrument: blow into the microphone and watch as pinwheels spin together in melodious harmony. Create ambient music together and bathe in a sound space of sonic pinwheels, soothing wind, and sparkly chimes.
Sonic Pinwheel: https://ccrma.stanford.edu/~tzfeng/pinwheel
Installation. Location: Studio D (221), Day/Time: Thursday and Friday all day.
WebChucK IDE
Terry Feng
WebChucK IDE is a web-based integrated development environment for writing and running ChucK code. WebChucK IDE provides tools and workflows for developing and running ChucK on-the-fly and in any web browser, on desktop and mobile devices. This environment integrates ChucK development with visualization and code-based generative web UI elements to offer an accessible and playful way to program computer music.
Poster/Demo. Location: Ballroom (216), Day/Time: Thursday all day.
Mouse Drumset
Terry Feng
A two-hand, two-mouse USB-peripheral drum set built in ChucK. Learn to play and perform sample-based drums in a new and exciting way!
Pettable Musical Instrument. Location: Max Lab (201), Day/Time: Thursday.
Dysfunctionals
Pedro González Fernández
This demo features a group of cheap cardboard robots resembling animals, each adorned with hair and visible circuit parts. Confined in a small space, the robots move and interact with each other, creating an unpredictable and dynamic sounding environment.
Dysfunctionals invites reflection on the interplay between technology and nature, chaos and control, functionality and dysfunction. It questions the boundaries of what it means to be alive in a world increasingly dominated by artificial optimization. A choir of glitches speaking to the beauty of imperfection.
Work in progress. Location: Patio (outside 313), Day/Time: Thursday.
CCRMA Neuromusic Laboratory: Research Into Music and Mind
Takako Fujioka
My research in the Neuromusic Laboratory at CCRMA continues to explore human musical ability through brainwaves and behavioural observations. Our topics include perceptual processes for musical rhythms, harmonies, and rhymes; brain recovery from stroke-related disability; tactile stimulation for cochlear implant listening; improvisation; ensemble synchrony; audio-visual sensory integration; and music-culture differences. Having moved from Japan to Canada to the US, and across Engineering, Neuroscience, and Music, I am aware of the value of interdisciplinary approaches and open-mindedness. My overarching goal is to continue bridging humanity and human-centered neuroscience and health research around audition and music. The talk reflects on ten years of research at CCRMA and discusses future directions.
Lecture. Location: Classroom (217), Day/Time: Friday 10:10-10:25am.
Stimuli Jukebox
Takako Fujioka, Alex Han, Barbara Nerness, Vidya Rangasayee, Spark Wu, Julia Yu, Benny Zhang, Marise van Zyl
Many experiments happen in the Neuromusic Lab each year, each involving some kind of musical stimuli (sounds that the subjects listen to) and/or musical task (music that the experiment asks the subjects to perform). Throughout the day we will play an assortment of such sounds, to give some of the sonic flavor of the experiments that take place here.
Installation. Location: Neuromusic Lab (103), Day/Time: Thursday all day.
Experiment Demonstrations in the Neuromusic Laboratory
Takako Fujioka, Yiheng Dong, Alex Han, Richard Lee, Marise van Zyl
Experiment demonstrations include: Drum Duet Improvisation by Alex Han (Thursday morning 10am-12pm), Audio-Tactile Melody Listening for Cochlear Implant Listeners by Richard Lee (Friday morning 10am-12pm), VR Piano Playing with Different Acoustic Environments by Marise van Zyl (throughout Thursday & Friday), and an Audio-Visual Sensory Integration EEG experiment paradigm by Yiheng Dong (Thursday afternoon 2pm-4pm). Includes task demonstrations and open discussions with investigators about our preliminary results.
Demo. Location: Neuromusic Lab (103), Day/Time: Thursday all day.
Complex Emotional Processing in Music: An ERP Study
Anna Gruzas, Alina Davison, Takako Fujioka
Prior work has established music’s ability to convey a semantic context when paired with another stimulus, and thereby convey emotional meaning. Work investigating this ability of music to convey emotional content has mainly focused on “basic” emotions such as “happiness” or “sadness.” Using the N400 ERP component, which is commonly used for assessing processing of meaning in language, the present study aims to determine whether the same N400 response can be evoked from incongruence of emotions of various intensity levels, either of opposing valence or within one valence.
Poster/Demo. Location: Neuromusic Lab (103), Day/Time: Thursday.
"Point Line Piano". Transforming Intermedia Expression and Engagement in VR.
Jarosław Kapuściński
Point Line Piano is a project at the intersection of music and virtual reality, offering an immersive experience that redefines the composition, performance, and reception of piano music. Participants draw lines in VR that generate music, melding auditory, visual, and kinesthetic elements, and witness a novel paradigm of spatial and full-body abstraction. This presentation will explore VR's potential to transform artistic expression and audience engagement in intermedia.
Lecture. Location: Stage (317), Day/Time: Thursday 10-10:10am.
Point Line Piano
Jaroslaw Kapuściński, OpenEndedGroup (Marc Downie, Paul Kaiser) with Eito Murakami
Point Line Piano is a VR project that reimagines the composition, performance, and reception of piano music by fusing its modes of creating, playing, and listening. As you interact with it, your ears, eyes, and hands act in concert. You start by stroking lines freely in the space around you, sparking musical notes that are notched as points on the lines as you draw them. These notes quickly accumulate, forming distinct melodic phrases and rhythms, while the computer generates an intricate audiovisual dance all around you. The work enables a spatial and full-body experience of abstraction not found in any other medium. Sign up for a time slot by following this link.
VR Experience. Location: Studio J (106), Day/Time: Thursday and Friday by signup.
SVOrk (Stanford VR Orchestra)
Kunwoo Kim, Andrew Zhu, Eito Murakami, Marise van Zyl, Max Jardetzky, Yikai Li, Ge Wang
This lecture chronicles the conception of SVOrk (Stanford VR Orchestra), a novel computer music ensemble, where both performers and audience coexist in a fully-immersive virtual concert space. With 5 performers and 15 audience members each wearing a VR headset, SVOrk presents a series of live-networked audiovisual performances, where dandelions are musical instruments and a cyber city is the stage – an experience exclusive to the medium of VR. In addition, SVOrk reimagines the concert-going experience in VR, exploring the audience’s virtual identity, new forms of expressive communication, and social engagement. SVOrk’s premiere concert is planned to occur at CCRMA on June 1st and 2nd, 2024.
Lecture. Location: Stage (317), Day/Time: Thursday 10:15-10:25am.
VVRMA (VR Field Trip to CCRMA)
Kunwoo Kim, Ge Wang
Project VVRMA is an interactive, audiovisual, fully immersive VR field trip to a virtual reimagining of CCRMA. Envisioned as a “VR interactive museum for computer music”, VVRMA is the name of the virtual centerpiece building, meticulously modeled after CCRMA’s physical architecture. Within VVRMA, visitors explore various Zones of Interest (ZOIs), which are collections of immersive experiences around different themes that mirror various research labs at CCRMA. Currently, there are two ZOIs: 1)“From Sound to Brain” (a boat ride into the ear canal to learn about the science of hearing), and 2) “The World of Artful Design” (a musical city for exploring interactive audiovisual design and humanistic implications of technology). VVRMA aims to create a playful, expressive, and immersive learning space accessible to a general audience, who are curious about music, technology, and the medium of VR.
Poster/Demo. Location: VR Lab (107), Day/Time: Thursday.
SVOrk (Stanford VR Orchestra)
Kunwoo Kim, Andrew Zhu, Eito Murakami, Marise van Zyl, Max Jardetzky, Yikai Li, Ge Wang
This poster/demo session presents a demo video of SVOrk (Stanford VR Orchestra), a novel computer music ensemble, where both performers and audience coexist in a fully-immersive virtual concert space. With 5 performers and 15 audience members each wearing a VR headset, SVOrk presents a series of live-networked audiovisual performances, where dandelions are musical instruments and a cyber city is the stage – an experience exclusive to the medium of VR. In addition, SVOrk reimagines the concert-going experience in VR, exploring the audience’s virtual identity, new forms of expressive communication, and social engagement. SVOrk’s premiere concert is planned to occur at CCRMA on June 1st and 2nd, 2024.
Poster/Demo. Location: VR Lab (107), Day/Time: Thursday.
ROI - Resonant Object Interface
Sasha Leitman
The ROI is a sensing methodology built from resonant objects, vibration exciters, and contact microphones. Objects are excited with vibration transducers; as the user touches an object, they alter the magnitudes of its higher harmonics; those magnitudes are measured and turned into control data for music software (a minimal signal-flow sketch follows this entry). It is an acoustic input system for a digital musical environment, and it engages with many of the qualities that make transducers so compelling: materiality, embodiment, liveness, nuance, and tangibility.
Pettable Musical Instrument. Location: Max Lab (201), Day/Time: Thursday all day.
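To make the signal flow above concrete, here is a minimal, hypothetical Python sketch (not the ROI implementation): the magnitudes of a driven object's upper harmonics, as seen by a contact microphone, are measured with an FFT and normalized into control values. The excitation frequency, number of harmonics tracked, and the synthetic input frame are assumptions for illustration; a real system would read frames from a live audio input.

# Illustrative sketch only: upper-harmonic magnitudes -> normalized control data.
import numpy as np

SR = 48000          # sample rate
F_DRIVE = 110.0     # frequency the vibration exciter is driven at (assumed)
N_HARMONICS = 6     # number of upper harmonics to track

def harmonic_controls(frame, sr=SR, f0=F_DRIVE, n=N_HARMONICS):
    """Return n control values in [0, 1], one per harmonic 2*f0 .. (n+1)*f0."""
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    mags = []
    for k in range(2, n + 2):                      # skip the fundamental
        bin_idx = np.argmin(np.abs(freqs - k * f0))
        mags.append(spectrum[bin_idx])
    mags = np.asarray(mags)
    return mags / (mags.max() + 1e-9)              # normalize for mapping to synth parameters

# Stand-in for one frame of contact-mic input: a driven fundamental plus harmonics
# whose relative levels would change as a hand touches and damps the object.
t = np.arange(2048) / SR
frame = sum(a * np.sin(2 * np.pi * k * F_DRIVE * t)
            for k, a in [(1, 1.0), (2, 0.4), (3, 0.2), (4, 0.1)])
print(harmonic_controls(frame))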
El CelloSaurio
Fernando Lopez-Lezcano, Chris Chafe
The material for this piece starts with four disembodied cello strings at the cardinal points of a compass that lives in a world with a wildly oscillating magnetic field. They are not alone and share the space with electronic sounds generated by "Carlitos", one member of a small herd of "Dinosaur" synths - hence the name of the piece. “Carlitos” is a small eurorack modular synthesizer that not only contributes its own sonic components to the soundscape, but also processes the celletto strings through several of its modules. A custom SuperCollider program can also remember and loop both performers, and provides the 3D real-time spatialization of the final result.
Live Musical Performance. Location: Stage (317), Day/Time: Thursday Concert (4-5:30pm) piece 4/5.
Joy-Centered Accessible Design: Updates on Closed-Captioning, Musical Haptics, and Cochlear Implant Music Research
Lloyd May
Assistive technologies, such as hearing aids or closed-captions, provide access to information and media for millions of people. However, by focusing nearly solely on legal compliance, we've designed opportunities for customization and personalized joy out of these devices. In this research update I'll be presenting my latest work in (1) co-creating haptic instruments with D/deaf and hard-of-hearing artists, (2) creating a customizable closed-captioning format that allows users to create a bespoke captioned experience, and (3) a music personalization platform designed with cochlear implant users to create bespoke mixes of recorded music.
Demo. Location: 3rd floor work area (305), Day/Time: Thursday 2-4pm.
BYOx: Open Workshop
Mike Mulshine, Celeste Betancur
BYOx (bring your own x) is a co-creative installation-building space that encourages a broad range of expressions and contributions from anyone who chooses to participate. The experience can be thought of as a jam session, but in this case, on top of possibly making music together, we will be building stuff (CCRMA-type interactive computer-musical things, art installations, art, or whatever you'd like). The building of the installation experience will be loosely guided by Mike and Celeste, who will also be hacking and making things, successively adding to the space. There will be materials, instruments, and helpful people around for you to do the same.
Communal co-creative experience. Location: Lounge (313), Day/Time: Thursday all day.
RayTone: A Node-based Audiovisual Sequencing Environment
Eito Murakami, John Burnett
RayTone is a freely available software environment for creating audiovisual compositions. The software emphasizes the aesthetics and joy of patching procedures, aiming to promote a playful workflow for transforming creative ideas into artistic content. RayTone exposes native access to the ChucK music programming language and the OpenGL Shading Language (GLSL), encapsulating programming of arbitrary complexity inside each unit (node) on the canvas. The ability to connect units sequentially, as well as to live-script the functionality of each unit, makes RayTone suitable for audiences of widely varying skill levels and an entry point into digital signal processing and shader programming.
Demo. Location: 3rd floor end of hallway (302), Day/Time: Thursday 11:30am-1pm, 2-4pm.
take shelter under the umbrella. (please)
Daiki Nakajima, Nolan Miranda
making it rain in studio e. can you fit two people? come relax...
Installation. Location: Studio E (320), Day/Time: Thursday and Friday all day.
Creativity and engagement during turn-taking piano duet improvisation as indexed by alpha oscillations: joint action influences creative thinking in real-time
Barbara Nerness, Noah Fram, Kunwoo Kim, Aditya Chander, Cara Turnbull, Elena Georgieva, Sebastian James, Matthew Wright, Takako Fujioka
Musical improvisation requires a complex organization of brain functions in order to generate musical ideas, translate them into actions, and integrate auditory and tactile feedforward/feedback for future planning. In addition, ensemble performance involves real-time coordination using a joint-action scheme. This study investigates how alpha oscillations index creative ideation and attention to one's partner during a turn-taking duet improvisation. Simultaneous EEG was recorded from two pianists while they alternated playing a scored or improvised melody, for a total of four phrases. Alpha power for phrases 2 and 3 was analyzed depending on whether a pianist was the starting or the joining player. We focused on these two phrases since the task for each partner was identical in the preceding and subsequent phrase. Prior to playing, both partners showed a larger alpha ERD (event-related desynchronization) for improvisation than for the score, reflecting the additional cognitive processes involved in preparing for the former. Furthermore, when listening to one's partner, alpha ERS (event-related synchronization) occurred only if the partner played the score, indicating less attention paid to the partner's actions. Interestingly, when the joiner listened to the first phrase played by the starter, alpha ERD was significantly stronger than when they listened to the second phrase played by the starter, indicating higher engagement of the joiner, perhaps because they must fit their part to the new melodic context set by the starter. Our results suggest that the source of musical content (improvisation vs. score) and a player's role in the musical structure both affect attentional engagement between duet partners.
Poster/Demo. Location: Neuromusic Lab (103), Day/Time: Thursday.
SIREN: Sonification Interface for REmapping Nature
Tristan Peng, Hongchan Choi, Jonathan Berger
SIREN is a flexible, extensible, and customizable web-based general-purpose interface for auditory data display (sonification). With plug-and-play functionality, and numerous methods to customize and create meaningful auditory display, SIREN provides useful features for pedagogy, methods for exploratory data sonification, and an extensible, open-ended development platform. Inspired by common digital audio workstation (DAW) workflows, SIREN provides a familiar and intuitive layout for a variety of sonification use cases.
Poster/Demo. Location: Ballroom (216), Day/Time: Thursday 11am-1pm, 2-4pm.
Laziness: A Virtue Made Easier by AI
Nick Porcaro
Since ChatGPT burst onto the scene in late 2022, I've enlisted its assistance for a range of programming tasks, experiencing a diverse array of outcomes. This talk will showcase instances where ChatGPT performed with astonishing efficiency, alongside moments when it hilariously met its match against programming challenges, effectively hitting a brick wall. I'll share insights into its capabilities, its limitations, and the amusing unpredictability of relying on AI. Ultimately, it’s clear that while ChatGPT can serve as a valuable co-pilot, steering successfully through the complexities of code still requires a seasoned expert at the wheel.
Lecture/Demo. Location: Stage (317), Day/Time: Thursday 2:30-2:40pm.
Effects of Musical Training and Melodic Motifs on Raga Discrimination and Identification
Vidya Rangasayee, Takako Fujioka
While listening to Carnatic Music, listeners engage in raga identification. A raga is defined by a sequence of notes and may contain any number of notes out of the 12 semitones in an octave. Specific phrases and/or inflections of certain notes, called motifs, are characteristic of a raga. We aim to assess whether untrained listeners of Carnatic Music can discriminate between ragas as accurately as trained listeners, and whether the presence of motifs improves accuracy in the discrimination and identification of ragas. The results show that trained musicians learn to use motifs to identify ragas. Similar results on the discrimination task between trained and untrained listeners indicate implicit learning of ragas from motifs. Future work will study specific effects of motifs on the perception and cognition of ragas.
Poster/Demo. Location: Neuromusic Lab (103), Day/Time: Thursday 10am-1pm.
Sondius Physical Modeling Work at CCRMA 1993-1997
Pat Scandalis, Nick Porcaro, Julius Smith
Between 1993 and 1997, work was done at CCRMA to create a full set of musical instrument physical models known as "Sondius". This effort was carried out primarily on NeXT machines using a custom DSP/control editor called SynthBuilder. Computation was done on the Motorola 56k DSP as well as on a custom 8-blade DSP engine built at CCRMA. This work is covered in Andrew Nelson's book "The Sound of Innovation". Historical photos and sound examples from this period will be shown.
Historical lecture. Location: Classroom (217), Day/Time: Friday 1:20-1:35pm.
MIDI 2 and the MPE Profile for Instrument Creators
Pat Scandalis
MIDI 2 was ratified by the MIDI Manufacturers Association in 2020. MIDI 2 has been implemented in Linux, macOS, iOS, and Android, with Windows support coming later in 2024. In March 2024, the MIDI 2 Profile for MPE (MIDI Polyphonic Expression, MIDI 3D Expression) was ratified. The MIDI 2 Profile for MPE is a bridge from MIDI 1 to MIDI 2, and it is unique because it can be implemented with either MIDI 1 or MIDI 2 messages. Instrument creators who build MIDI 1 MPE controllers/synths can be fully MIDI 2 compliant simply by implementing profile negotiation.
The presentation is a brief overview of MPE and MIDI 2 aimed at instrument creators (a minimal per-note-expression sketch in MIDI 1 terms follows this entry).
Lecture. Location: Stage (317), Day/Time: Thursday 10:45-10:55am.
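For instrument creators new to MPE, here is a minimal, hypothetical Python sketch of the MIDI 1 side of the story: each sounding note is assigned its own member channel, so that channel pitch bend acts per note. MPE zone configuration and the MIDI-CI profile negotiation discussed in the talk are omitted, and the channel layout and bend range below are illustrative assumptions.

# Illustrative sketch only: per-note expression with plain MIDI 1 messages.
MEMBER_CHANNELS = list(range(1, 16))   # member channels 2-16 (indices 1-15); channel 1 (index 0) is the master

def note_on(channel, note, velocity):
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def pitch_bend(channel, semitones, bend_range=48.0):
    """14-bit pitch bend centered at 8192; MPE member channels commonly use a +/-48 semitone range."""
    value = int(round(8192 + (semitones / bend_range) * 8192))
    value = max(0, min(16383, value))
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

# One touch: allocate a member channel, start the note, then bend just that note.
ch = MEMBER_CHANNELS[0]
for message in (note_on(ch, 60, 100), pitch_bend(ch, +1.0)):
    print(message.hex(" "))

Because these are ordinary MIDI 1 messages, the same controller logic can serve the MPE Profile; per the abstract above, MIDI 2 compliance is then a matter of implementing profile negotiation.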
CCRMA Career Paths
Rodrigo Segnini, Matthew Wright, many alumni
CCRMA alumni often go on to become technical leaders in audio or Internet startups and major companies. They also become scholars in academia, securing professorships and other positions at institutions with similar programs. Additionally, they may pursue careers as researchers or freelance artists at the intersections of art and science. In this session, we strive to present a sample of the various endeavors of CCRMA-lites, which we define to include faculty, visiting scholars, researchers, industry affiliates, and generally anyone who has taken part in the activities pursued at the Knoll, so as to develop a picture of CCRMA's aggregate impact on society.
Part 1: aggregate data presentation.
Part 2: alumni lightning talks: what did they learn here, where did they go, what do they do...?
Part 3: panel discussion.
Lecture, Lightning Talks, and Panel Discussion. Location: Classroom (217), Day/Time: Friday 2:30-4:30pm.
Physically Modeled, GPU Accelerated Drum Synthesizer
Travis Skare
Drum synthesis of cymbals and shells, utilizing a mid-range commodity GPU to obtain real-time performance of the ten+ individual kit pieces. Models of a few drum kit brands and over a dozen cymbals may be selected on the interface, and these may be played on the attached pads.
Attendees are invited to stop by and try this out; instructions will be posted next to the instrument.
Pettable Musical Instrument. Location: Max Lab (201), Day/Time: Thursday 10am-1pm, 2-3pm.
Forty Years of the Hearing Seminar at CCRMA
Malcolm Slaney
The Hearing Seminar has been a fixture at CCRMA for 40 years. Started by Prof. Earl Schubert and Bernard Mont-Reynaud, the seminar has fostered a community of scientific researchers and industrial practitioners interested in all aspects of auditory perception. Over these decades, around a large table at CCRMA, we have had vigorous discussions on neurophysiology, psychology, audiology, engineering and of course music. I'll discuss the notable science and technology we have explored, as well as note successful network connections.
Historical lecture. Location: Classroom (217), Day/Time: Friday 10:50-11:05am.
Inventing Modern Sequence Models as a Music 320 Project
Julius Smith
Today's sequence models (such as large language models) in machine learning (AI) arose from a blend of principle-based design and empirical discovery, spanning several fields. This talk describes how the ideas could have emerged from an elementary signal-processing approach. This viewpoint has several attractive features:
(1) Signal processing folks can quickly learn what is happening in a motivated way
(2) Machine-learning experts might benefit from signal-processing insights
(3) Obvious suggestions for things to try next naturally arise
Historical lecture / open class meeting. Location: Classroom (217), Day/Time: Friday 1:40-1:55pm.
snapshots
Rochelle Tham, Renzo Ledesma
In our everyday lives, what moments do you consider special? Which points in time do you wish to hold on to, record, and relive? What thoughts and emotions stir you to latch on to particular moments in your life? snapshots is an experimental work that explores and interprets these special moments in our daily lives through sound.
Each movement was inspired by a photograph or short video capturing a moment from our everyday life. Using sounds originating from mundane, daily-use objects and environments, as well as audio processing through SuperCollider, we composed movements that relate to our visual interpretation, emotional recollection, and any other feelings evoked by the chosen media.
Musical Offering. Location: Stage (317), Day/Time: Thursday Concert (4-5:30pm) piece 3/5.
Changes in room acoustics engage spatial sound processing: an MMN study
Marise van Zyl, Takako Fujioka
Previous research has shown that spatial information in sound is processed in the brain preattentively. Specifically, stimulus changes in room acoustics (a different reverberation) or sound source location (a different binaural cue) elicit mismatch negativity (MMN). However, it is unclear how these two auditory-feature processes interact with each other, if at all. In this study, we examine MMN and P3a using an oddball paradigm with complex tones as standard stimuli, originating from a central location and within a given reverberation condition, either large or small. Three deviant types - reverb, location (60 degrees to the right), and both reverb and location - were presented randomly, each at 7.5%. We also used complex tones with either a 300 Hz or 2000 Hz fundamental frequency to separate interaural time difference (ITD) and interaural level difference (ILD) binaural cues. We tested two alternative hypotheses: (1) reverberation and sound location are processed independently, predicting that the summation of the responses to the reverb and location deviants would be similar to the response to the double deviant; (2) alternatively, reverberation and location processing interact with each other, predicting that the double-deviant response differs significantly from the summation of the two single-deviant responses. Results followed the second hypothesis for MMN amplitude. Furthermore, ANOVA showed a significant interaction between frequency and deviant type because of stronger negativity for 300 Hz than for 2000 Hz. No interactions were found for P3a. Our data suggest that a reverberation change with a binaural cue is processed quite similarly to the binaural cue itself at the preattentive level, indicating the integration of room acoustic information into spatial object separation.
Poster/Demo. Location: Neuromusic Lab (103), Day/Time: Thursday.
Factors in Perceived Rhythmic Complexity
Leigh VanHandel
Research on rhythmic complexity has often used syncopation as a primary factor in, or as a proxy for, perceived complexity. This study investigates the role of other rhythmic characteristics in perceived complexity, including tempo, metrical context, density (the number of onsets in a given rhythmic pattern), durational variability, and syncopation. The study expands on prior research by presenting stimuli at six tempi in order to study a wider range of tempi. Results support the earlier findings: perceived complexity was rated higher for faster tempi overall; presentation with or without a metrical context also affected ratings; and both density and syncopation are positively correlated with perceived complexity overall but are highly tempo- and context-dependent, indicating the importance of considering multiple factors in perceived complexity and how those factors interact with tempo.
Lecture. Location: Stage (317), Day/Time: Thursday 2:45-2:55pm.
ChucK Programming Language in 2024
Ge Wang
Since its inception in the early 2000s, the ChucK music programming language has undergone many expansions and changes. In the early years, there was a flurry of contributions that are still in use today. During the 2010s, there was a notable decrease in ChucK development (despite a few dedicated individuals who kept the language on life support). Recently, however, centered at CCRMA, ChucK development has experienced something of a resurrection. This talk highlights the major initiatives since 2018, including new core language features, ChuGL (graphics), ChAI (AI), Chunity (ChucK in Unity), Chunreal (ChucK in Unreal Engine), WebChucK (ChucK in browsers), and further extensions to the language through Chugins (ChucK plugins). Furthermore, we will highlight future directions of ChucK development.
Lecture. Location: Stage (317), Day/Time: Thursday 10:30-10:40am.
A Brief History of Artful Design at CCRMA 2004-Present
Ge Wang
Why do we make the things we make? This presentation chronicles an evolution of artful tool-building at CCRMA (and beyond) and tells a brief history of ChucK, instrument design for laptop orchestra, mobile music, virtual reality design, and interactive AI. It is a reflection on computer music design research and why we do it -- as well as on the importance of teaching "critical making" courses such as Music 256a ("Music, Computing, Design") and Music 356 ("Music and AI: A Critical-Making Course").
Historical lecture. Location: Classroom (217), Day/Time: Friday 11:30-11:45am.
Time-Varying Evenly and Oddly Stacked TDAC Transforms
Ryan Wixen, Marina Bosi
Time domain aliasing cancellation (TDAC) transforms are ubiquitously used for time-frequency mapping in audio coding. The evenly stacked TDAC (ETDAC) transform shares several important properties with the well-studied oddly stacked TDAC (OTDAC) transform. Firstly, the ETDAC transform can be expressed in terms of modified discrete cosine and sine transforms. Secondly, we prove that by rearranging its outputs, the ETDAC transform can be manipulated into a filter bank. Lastly, we show that the ETDAC can be extended like the OTDAC, requiring the same window conditions for perfect reconstruction. We also present a new approach to block switching for the extended TDAC transforms. (For reference, the standard oddly stacked form is given after this entry.)
Lecture. Location: Stage (317), Day/Time: Thursday 2-2:10pm.
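For orientation, the oddly stacked TDAC transform referred to above is the familiar MDCT. In one standard textbook form (given here for reference, not taken from the talk), 2N windowed samples x_n map to N coefficients

X_k = \sum_{n=0}^{2N-1} w_n \, x_n \cos\!\left[\frac{\pi}{N}\left(n + \frac{1}{2} + \frac{N}{2}\right)\left(k + \frac{1}{2}\right)\right], \qquad k = 0, \ldots, N-1,

and perfect reconstruction from 50%-overlapping blocks requires the Princen-Bradley window condition

w_n^2 + w_{n+N}^2 = 1, \qquad n = 0, \ldots, N-1 \quad (\text{with } w_n = w_{2N-1-n}).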
Listeners Detect Deviant Beats Better in Musical Rhythm Contexts with Fewer Subdivision Levels: an MMN and Behavioral Study
Julia Yu, Aditi Tuli, Naomi Shi Yang Gong, Takako Fujioka
Rhythm perception requires listeners to identify a beat structure from an ongoing context. Typically, fewer subdivisions in the rhythm allow easier extraction via sequential processing, whereas increased levels of subdivision require hierarchical processing.
We hypothesized that different subdivision levels would influence listeners’ ability to detect deviant beats. This would be reflected in higher behavioral performance as well as a larger amplitude of mismatch negativity (MMN).
We recorded EEG while participants passively listened to a variety of rhythms. Each 2/4 rhythm contained a prime part and subsequent steady beats. The four different primes consisted of: (1) two quarter notes, (2) two 8th notes and one quarter note, (3) four 8th notes, and (4) a dotted-8th-plus-16th-note pattern repeated twice. The subsequent pattern was always the same, consisting of three quarter notes in standard trials, with the final note occurring an 8th or 16th note earlier in deviant trials. Afterwards, participants also determined whether two presented rhythms were the same or different.
The MMN was primarily evident in the frontocentral electrodes. Deviant 8th elicited a significantly larger MMN than deviant 16th across all primes. Behaviorally, listeners also had more difficulty detecting the smaller temporal deviation of 16th compared to the 8th. When primes contained more subdivisions, behavioral accuracy significantly decreased, indicating a Prime x Deviant interaction. These results support our hypothesis that different subdivision levels affect one’s ability to extract beats. Our findings point to the interplay between sequential and hierarchical processing in extracting beat structure in auditory rhythms.
Poster/Demo. Location: Neuromusic Lab (103), Day/Time: Thursday.