Real-time Audiovisual Composition with RayTone
Overview:
Are you interested in experimenting with sound and visuals to create an original multimedia performance? This five-day workshop offers an introduction to real-time audiovisual composition using RayTone - a node-based sequencing environment designed to promote a playful workflow for transforming creative ideas into artistic content.
The workshop is hosted by the authors of RayTone, Eito Murakami and John Burnett, both of whom are artists/engineers in the fields of computer music and graphics. We will also welcome Professor Ge Wang as a guest lecturer to introduce ChucK - a strongly-timed music programming language. With ChucK embedded in RayTone, users can customize the behavior of audio units that perform digital signal processing at runtime. Additionally, RayTone allows users to load OpenGL Shading Language (GLSL) scripts for graphics programming, so that audio and visual elements can be controlled simultaneously.
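To give a flavor of what "strongly-timed" means, here is a minimal ChucK sketch of the kind of program an audio unit might run (an illustration written for this announcement, not code taken from RayTone itself):

```chuck
// connect a sine oscillator to the audio output
SinOsc osc => dac;
0.5 => osc.gain;

// step through a few frequencies, advancing time explicitly;
// the `=> now` idiom is what makes ChucK "strongly-timed"
[220.0, 277.18, 329.63, 440.0] @=> float freqs[];
for (0 => int i; i < freqs.cap(); i++)
{
    freqs[i] => osc.freq;
    250::ms => now;  // let exactly 250 milliseconds of audio pass
}
```

Because time only advances when the program says so, timing in ChucK is sample-accurate by construction - the property that makes it a natural fit for sequencing environments like RayTone.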
The workshop is structured into three categories: an introduction to RayTone sequencing, lectures on ChucK and shader programming, and activities to design original RayTone patches. In addition to learning from the instructors, attendees will exchange knowledge and collaborate with each other to develop creative ideas. By the end of the workshop, attendees will be able to combine digital signal processing with graphics programming to perform real-time audiovisual composition. There will be an opportunity to present an original piece using RayTone on the last day of the workshop. Prior musical and programming experience is helpful but not required.
More information can be found at https://www.raytone.app.
If you have any questions, please feel free to contact Eito at eitom@ccrma.stanford.edu.
Requirements:
Please bring a laptop (Windows / macOS) and a pair of headphones to the workshop. CCRMA has a few extra pairs of headphones for those who cannot bring their own.
Schedule:
Day 1: Introduction to RayTone & Digital Signal Processing
• Self-introductions
• RayTone example projects
• Digital Signal Processing (DSP) with ChucK
- discrete time sampling
- additive / subtractive synthesis
- filter design
Day 2: More DSP + Shader Programming
• More DSP with ChucK
- granular synthesis
- physical modeling
• Shader programming
- GLSL examples
Day 3: More Shaders + Graphics
• More shader programming
• Integrating webcam, images, and videos with shaders
• (Optional: Anatomy of RayTone Engine)
Day 4: Final project preparation
• Musical composition with MIDI controllers
• Communication with other software via OSC
• Expressive audiovisual integration
Day 5: Presentations
• More preparation time
• Presentations by participants!
About the instructors:
Eito Murakami
Eito Murakami is a Ph.D. student at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. He graduated from the University of California San Diego with bachelor's degrees in Interdisciplinary Computing and the Arts Music (ICAM) and Political Science/International Relations. Eito is an electronic composer, performer, sound designer, and virtual reality developer. By combining his classical music training with proficiency in audio and graphics software, he creates digital interfaces and instruments that promote playful workflows for multimedia performances. At CCRMA, his research involves designing audio playback systems in virtual reality that support dynamic spatial reverb and multiplayer interaction.
John Burnett
John Burnett is a multimedia artist and audio researcher based in Los Angeles, California. Drawing from a background in (electro)acoustic music, digital signal processing, and computer science, they create technologically augmented and reactive multimedia installation works as well as sound and projection design for dance and theatrical productions. They have contributed to research that has advanced the areas of geometric acoustics, hardware-accelerated acoustic modeling, and immersive media. Their dissertation explores methods of multimodal interaction in immersive media, such as using point clouds to inform spatialized granular synthesis systems and abstract algebraic methods of defining audiovisual node graph topologies.
As former members of the Sonic Arts Research and Development group at UC San Diego's Qualcomm Institute, Eito and John presented a virtual reality composition titled "Becoming" at ACM SIGGRAPH 2022 - Immersive Pavilion.