CCRMA World Update
Even in a fragmented state caused by the pandemic restrictions and by our building being mostly closed, CCRMA has been busy with exciting new projects. So instead of waiting until next year’s in-person Open House, we are offering this special event to bring you updates on our work. We are also hoping that this format will give people outside of the Bay Area the opportunity to interact with our community.
Most of the event will take place on Zoom, and registration is required at this link.
Please check this page for continuously updated details.
The event is of course free!
SCHEDULE OVERVIEW
All times PDT
Il y a plus d’eau que prévu sur la lune
for contrabass flute, voice and electronics
Patricia Alessandrini, Keiko Murakami (contrabass flute, voice)
Il y a plus d’eau que prévu sur la lune is a radiophonic piece for contrabass flute, voice and electronics, commissioned for flutist Keiko Murakami by the Alla Breve program of France Musique, produced and curated by Anne Montaron, in collaboration with Françoise Cordey and Soizic Noël. It was realised in the studios of the Groupe de Recherches Musicales (GRM) in Paris in December 2020, and received its broadcast première on France Musique in February 2021. The piece was composed in close collaboration with Keiko Murakami, through a series of remote and 'distanced' rehearsals in Summer and Fall 2020.
Musical offering. Time: 10:50am PDT. Location: Zoom main room. Piece website
MUSIC 192A: Foundations of Sound-Recording Technology | Fall 2020 Projects
Constantin Basica (Instructor), Hanyu Qu (TA) + MUSIC 192A Students
In Fall 2020, MUSIC 192A served as an introduction to recording technologies and practices in a studio, at home, and in the field. The facilities available at CCRMA provided a basis for learning studio operation, as well as a space for recording projects remotely. The course addressed various audio engineering topics: room acoustics, analog and digital recording, microphone selection and placement, audio editing and mixing, audio effects processing (equalization, compression, convolution reverb, etc.), and sound design.
The students did not have physical access to the Recording Studio because CCRMA's building (The Knoll) was closed under coronavirus pandemic restrictions. But they were able to remotely record the Disklavier in the Studio with the Audiomovers Listento plug-in, with the instructor physically moving microphones around according to their instructions. The students also received recording kits (including headphones, audio interface, microphones, portable audio recorder, mic cables and stands, and acoustic isolation tools) to complete their assignments. The course projects were recorded by students using those kits in their homes and surroundings, and are available for listening at the link below.
Website. Presenter: Constantin Basica (http://www.constantinbasica.com)
Ocean Memory Concert
Jonathan Berger, Chris Chafe, Hongchan Choi, Tim Weaver, Robertina Sebjanic, Anya Yermakova, Heather Spence, others
Sound plays a significant role in formulating and preserving memory through its ability to communicate non-verbally with meaning and affect. The inherent temporal nature of sound allows for the representation of processes on multiple time scales.
This concert is the culmination of a workshop (https://ccrma.stanford.edu/~brg/soniOM/) featuring works by CCRMA and other composers and scientists.
Concert. Time: 6:00pm PDT. Location: Zoom main room and CCRMA LIVE. Recording will be available at CCRMA's Vimeo
Sound, Space and the Aesthetics of the Sublime
Jonathan Berger, Jonathan Abel, Eoin Callery, Elliot Canfield Dafilou, Lloyd May, Camille Noufi, Marise van Zyl
A brief project progress report on current and planned research in measuring, modeling and studying how music is shaped by and perceived in architectural space.
Lecture. Time: 11:30am PDT. Location: Zoom main room. Presenter: Jonathan Berger
Suspended Violin
Noah Berrie
This piece is a heavily technologically-mediated audiovisual deconstruction/reconstruction of the violin; a head-to-toe sonic exploration; a re-appraisal of a long-term relationship to the instrument; a microscopically focused appreciation of physical form and aural character. It emerged from last quarter's 222/250C/285 taught by Nando, Patricia, and Constantin.
Musical offering. Time: 11:20am PDT. Location: Zoom main room. Presenter: Noah Berrie (noberrie@stanford.edu). Piece website (high-quality version available)
Deep Plunder: Transcribing Sample-able Moments
David Braun
We use a deep recurrent neural network to identify which regions of music recordings contain isolated music production samples. We use parallel multiprocessing to generate 50 hours of fake music consisting of randomly chosen library samples and real song excerpts sequenced with various augmentations. The neural network uses supervised learning with an audio spectrogram as input and a one-dimensional labeled region as output. We achieve low error rates on our synthesized training and test datasets, but inadequate qualitative results on real music. We believe our network has the potential to generalize to real music, but to avoid time-consuming manual labeling, we must research better data generation and augmentation techniques.
https://ccrma.stanford.edu/~braun/ccrma-open-house/2021/
Casual discussion / Q&A. Time: 3:45–5:55pm PDT. Location: Zoom breakout room. Presenter: David Braun (braun@ccrma.stanford.edu)
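The synthetic-data idea described above can be sketched in a few lines. This is an illustrative toy, not the authors' actual pipeline: the function name, hop size, and labeling scheme are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_example(background, sample, hop=512):
    """Mix an isolated 'library sample' into a background excerpt at a
    random position and return the mix plus a per-frame 0/1 label region
    marking where the sample occurs (hypothetical helper, for illustration)."""
    mix = background.copy()
    start = int(rng.integers(0, len(background) - len(sample)))
    mix[start:start + len(sample)] += sample
    labels = np.zeros(len(background) // hop)
    labels[start // hop:(start + len(sample)) // hop] = 1.0
    return mix, labels

background = rng.standard_normal(22050)    # 1 s of stand-in "real song" audio
sample = 0.5 * rng.standard_normal(2205)   # 0.1 s stand-in library sample
mix, labels = make_training_example(background, sample)
```

In a setup like the one described, a spectrogram of `mix` would be the network input and `labels` the supervised target.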
Improvisations with the Bodyharp: From Choreography to Composition and Back
Doga Cavdir, Ronja Ver (dancer, choreographer)
This piece presents a short excerpt of an ongoing study with a movement-based musical instrument, Bodyharp, exploring the embodied relationship between composition and choreography. The instrument is custom-designed and manufactured to encourage movement expression in music-making. My collaborator, Ronja Ver, composes with the Bodyharp and reflects back to her composition with her movement.
Musical offering. Time: 3:15pm PDT. Location: Zoom main room. Presenter: Doga Cavdir (https://www.dogacavdir.com/)
JackTrip – Unlocking music performance during the lockdowns (a 1-year history)
Chris Chafe
JackTrip is a multi-machine technology which supports bi-directional flows of uncompressed audio over the internet at the lowest possible latency. Developed in the early 2000s, it was used in intercontinental telematic music concerts and a variety of musical experiments using high-speed research networks as the audio medium. Its ability to carry hundreds of channels simultaneously and its lightweight architecture led to a range of applications from IT for concert halls to small embedded systems. The pandemic has ushered in a new phase of development driven by musicians seeking solutions during lockdown. Major improvements have focused on ease of use and the ability to scale across worldwide cloud infrastructure. With orchestral-sized ensembles urgently in need of ways to rehearse on the network and most participants running their systems over commodity connections, this "new reality" runs counter to what's required for ultra-low-latency rhythmic synchronization. Many developers and musical practitioners have joined in the cause of finding adequate solutions.
Poster/Demo. Time: 3:45–5:55pm PDT. Location: Zoom breakout room. Presenter: Chris Chafe (https://chrischafe.net cc@ccrma.stanford.edu)
RTNeural: An Ongoing Quest to Run Neural Networks at Audio Rate
Jatin Chowdhury
The power of neural networks opens up a wide range of possibilities for audio processing and synthesis. However, implementing real-time neural network inferencing at audio rates with standard neural network libraries is often difficult and yields poor performance. RTNeural (https://github.com/jatinchowdhury18/RTNeural) is a lightweight, flexible C++ library with the ability to perform real-time inferencing for pre-trained neural networks. This presentation will discuss the design and performance of RTNeural, as well as demonstrate the musical possibilities that the library enables.
Poster/Demo. Time: 3:45–5:55pm PDT. Location: Zoom breakout room and webpage.
Discretization of Analog Filters Near the Nyquist Limit Using Conformal Mappings
Champ Darabundit, Jonathan S. Abel
We propose a new analog filter discretization method, based on conformal mapping, that is useful for discretizing systems with characteristics near or above the Nyquist limit. Our discretization method is based upon novel peaking and shelving conformal maps, whose derivation is presented as well. The resulting discrete-time implementation provides a close match to the desired analog response below half the sampling rate. Our proposed method is parameterizable, order preserving, and agnostic to the original filter's order or type. This method should be useful for filters with controllable parameters, or for analog filters that need to be replicated across multiple sampling rates.
Lecture. Time: 1:00pm PDT. Location: Zoom main room. Presenter: Champ Darabundit (https://ccrma.stanford.edu/people/champ-darabundit). Presentation slides
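For context, the best-known conformal-map discretization is the bilinear transform, whose frequency warping is usually compensated by "prewarping" so the digital filter matches the analog one exactly at a single chosen frequency — the kind of near-Nyquist matching the proposed maps generalize. A minimal sketch of that standard baseline (not the talk's novel peaking/shelving maps), for a one-pole lowpass:

```python
import numpy as np
from scipy.signal import bilinear, freqz

fs = 48000.0
fc = 18000.0                      # cutoff well above fs/4, where warping is severe
wc = 2 * np.pi * fc
# Prewarp the analog cutoff so the bilinear transform lands it exactly at fc.
wc_warped = 2 * fs * np.tan(wc / (2 * fs))
# Analog prototype H(s) = wc'/(s + wc'), discretized via s = 2*fs*(z-1)/(z+1).
b, a = bilinear([wc_warped], [1.0, wc_warped], fs=fs)
# Check the digital magnitude response at fc: it is exactly -3 dB, as designed.
_, h = freqz(b, a, worN=[fc], fs=fs)
```

Without prewarping, an 18 kHz cutoff at 48 kHz would land noticeably low; the conformal maps presented in the talk extend this single-frequency matching idea to peaking and shelving shapes.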
picTyourScore (pandemic edition)
Hassan Estakhrian
picTyourScore (pandemic edition) is an adaptive piece using graphic scorecards where musickers collectively interpret the cards as they appear in real time. The cards are randomly triggered by the "picTyourScore certified operator" who selects from various card types. Since the piece is adaptive, there are infinite outcomes. Three outcomes were recorded in this session. This is outcome #3.
Performed by JACK Quartet - Christopher Otto (violin), Austin Wulliman (violin), John Pickford Richards (viola), Jay Campbell (cello).
Recorded by Christopher Botta on December 16, 2020 at Cary Hall, Dimenna Center, NYC
Musical offering. Time: 2:50pm PDT. Location: Zoom main room.
Neuromusic Lab Recent Research Updates
Takako Fujioka, Vidya Rangasayee, Nolan Lem, Barbara Nerness, Kunwoo Kim, Aditya Chander, Noah Fram, Cara Turnbull, Elena Georgieva, Sebastian James, Matthew Wright
Takako will share the recent research progress in the CCRMA Neuromusic Lab, where we are having fun with mind, body, and brain for music! What do we do? We analyze musicians' brain waves during improvisation, during performance gesture, while feeling the beats out of a bunch of oscillators, and while counting hemiola metre. It is an informal conversation that serves up a glimpse into music perception and cognitive neuroscience.
Poster/Demo. Time: 3:45–5:15pm PDT. Location: Zoom breakout room. Presenter: Takako Fujioka (https://ccrma.stanford.edu/groups/neuromusiclab/)
Dinosaur He[a]rd!
Fernando Lopez-Lezcano
The "Apple Sauce Modular Mark V" sings, screams, clips, aliases, mostly digital hearts of sound talking to each other through thin analog patch cords. Many of its subsystems are tied together for this show, but not all of them. We will discover that in the land of the Strong, Karplus is King (of many Klangs), we will hear cross-modulated dual oscillators scream, and maybe also mutterings from love-sick silicon, and there will be more grains than you can shake a stick at.
Musical offering. Time: 1:20pm PDT. Location: Zoom main room. Presenter: Fernando Lopez-Lezcano (https://ccrma.stanford.edu/~nando/)
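For the curious, the "Karplus is King" pun refers to Karplus-Strong plucked-string synthesis, in which a burst of noise recirculates through a delay line with a gentle lowpass in the feedback path. A minimal sketch, with illustrative parameters:

```python
import numpy as np

def karplus_strong(freq, dur, sr=44100, decay=0.996, seed=0):
    """Minimal Karplus-Strong pluck: a noise burst circulates through a
    delay line whose feedback averages adjacent samples (a mild lowpass),
    so high partials die away first, leaving a string-like tone."""
    rng = np.random.default_rng(seed)
    period = int(sr / freq)                  # delay-line length sets the pitch
    buf = rng.uniform(-1.0, 1.0, period)     # excitation: one period of noise
    out = np.empty(int(sr * dur))
    for i in range(len(out)):
        out[i] = buf[i % period]
        buf[i % period] = decay * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

tone = karplus_strong(220.0, 0.5)   # half a second of A3 pluck
```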
Surrounded @ Home
Fernando Lopez-Lezcano
Teaching Music 222 ("Sound in Space") without a space to work in was a challenge. As we lost access to the 3D diffusion spaces at The Knoll (the Listening Room and the Stage), and the usual concerts were not possible, binaural sound over headphones seemed like the only option. The class actually experimented with an alternative solution in the form of a very cheap multichannel "kit" that students received for the class. This enabled them to experience real speaker-based surround at home, and actually introduced stuff that would not have been part of the class if we had all just used our ready-to-go studios. I will talk about solutions found, software developed, and how this might change teaching even when the pandemic is just a bad memory. Everything is available for anyone to see and use in a Git repository.
Lecture. Time: 2:00pm PDT. Location: Zoom main room. Presenter: Fernando Lopez-Lezcano (https://ccrma.stanford.edu/~nando/)
Surrounded @ Home
Fernando Lopez-Lezcano
A poster/demo companion to the 2:00pm lecture of the same name; see the lecture description above.
Poster/Demo. Time: 3:45–5:55pm PDT. Location: Zoom breakout room. Presenter: Fernando Lopez-Lezcano (https://ccrma.stanford.edu/~nando/)
The life and adventures of the Apple Sauce Modular Mark V synthesizer, or, Modules I Have Loved
Fernando Lopez-Lezcano
The Apple Sauce Modular was born around May 2020, out of pandemic angst and the need to spend funds donated over many years by Mark and Joan Applebaum. It is a Eurorack-format modular synthesizer that grew in five stages (so far), thus the "Mark V" moniker. I can talk about how and why it grew, quirks and features of different modules, what I have loved most, and effective sub-workflows that have emerged from almost one year of use. The Apple Sauce Modular (an early version) has been featured in my piece "Wings", has made noises in all the recent "Quarantine Sessions", and is getting ready for an upcoming long-form live solo piece. Right now it spans four 104HP 3U racks plus one Intellijel 104HP 1U rack (33 modules overall). It talks to two computers and interacts with realtime looping and spatial processing software written in SuperCollider.
Poster/Demo. Time: 3:45–5:55pm PDT. Location: Zoom breakout room. Presenter: Fernando Lopez-Lezcano (https://ccrma.stanford.edu/~nando/)
Audio Workflows for the Quarantine Sessions
Fernando Lopez-Lezcano
The pandemic holds us apart, but the network keeps us together. The Quarantine Sessions, a weekly jam session and networked concert, grew out of the need to overcome the musical isolation the COVID virus bestowed upon us. A group of six core performers is joined by invited guests every week, with no geographical boundaries. To date we have performed 46 concerts. The audio workflow relies on customized open-source free software tools, including JackTrip patched to support the unique graph of audio applications we use, a binaural Ambisonics decoder, a realtime mastering application written by one of our core members (Klaus Scheuermann), and a SuperCollider program I wrote that does the 3D spatialization, mixing, and some processing (including a 16-track asynchronous looper). We will describe how the workflow evolved over time, what we have been using recently, and talk about future improvements.
Poster/Demo. Time: 3:45–5:55pm PDT. Location: Zoom breakout room. Presenter: Fernando Lopez-Lezcano (https://ccrma.stanford.edu/~nando/)
Breaking B.R.A.D
Lloyd May
A hyper-text audio game exploring possible musical futures spawned by internet culture and over-powered AIs.
10-15 minutes long.
Currently in Alpha release.
https://ccrma.stanford.edu/~lloyd/Breaking_BRAD_Alpha/Breaking_BRAD_Alph...
Single-Player Game. Presenter: Lloyd May (https://ccrma.stanford.edu/~lloyd/)
Convergence
Douglas McCausland
Convergence is a work composed for augmented double-bass and electronics performer in third-order ambisonics, which explores both interactivity and agency between acoustic / electronic elements, and the mediation of gesture and musical materials in three-dimensional space. This work employs a complex performance system which allows for an intricate and nimble musical conversation to occur between performers. In working with Aleksander, we developed a set of expectations and rules which governed the performance, and which allowed for occasionally subtle, and sometimes pronounced shifts in our musical roles. Convergence also makes use of an audio score paired with visual cues, which is visible to the performers on a small screen nearby. Ultimately, the chaotic nature of the work gives both performers agency to explore the sonic and performative extremes of this complex system, as well as the liminal spaces which exist in-between.
All of these ideas collide in a densely chaotic and gestural work which encourages both performers to push their respective limits. Convergence is the second piece in a small collection of works developed for five-string double-bass and ambisonic electronics, in collaboration with bassist Aleksander Gabryś.
Musical offering. Time: 2:20pm PDT. Location: Zoom main room. Presenter: Douglas McCausland (https://www.douglas-mccausland.net/ & domccau@ccrma.stanford.edu)
The FAST Project: Fast Audio Signal-processing Technologies on FPGA
Romain Michon
In this presentation, we will give a quick overview of the FAST project, which aims to facilitate the programming of FPGA platforms for real-time audio signal processing and to explore applications enabled by this kind of technology (e.g., active control, extended computational capabilities, etc.).
Lecture. Time: 11:00am PDT. Location: Zoom main room. Presenter: Romain Michon (https://fast.grame.fr, michon@grame.fr), lecture slides
Interactive Audiovisual Media on the Web
Mike Mulshine
Recent web framework development efforts (Web Audio, p5.js, WebChucK, tracking.js, ar.js, and more) have rapidly expanded the horizons for creating unique, interactive, audiovisual, and highly accessible web experiences. My work demonstrates a commitment to a few guiding goals/principles: 1) make experiences that engage the user and make them feel some type of ownership or partnership in the experience, 2) make experiences that balance passivity and interaction, 3) make experiences that call more attention to the user's surroundings, environments, people, themselves than they do to the technology, and 4) make experiences that work on the web so that they are easily accessible (most people have access to a smart phone and/or computer and don't need too much technical know-how to access a website). I will demo and encourage the exploration of a few recent projects guided (some more, some less) by these goals/principles.
Poster/Demo. Time: 3:45–5:55pm PDT. Location: Zoom breakout room. Presenter: Mike Mulshine (https://www.mikemulshine.com/ )
Sunny Day (Dream)
Mike Mulshine
Sunny Day (Dream) is a hybrid interactive audiovisual and fixed media composition. I explore the world of a previous work MP5 and use it to frame fragments of a music video for my song "Sunny Day".
Musical offering. Time: 3:40pm PDT. Location: Zoom main room. Presenter: Mike Mulshine (https://www.mikemulshine.com/ )
Graphic design driven JUCE user interfaces
Nick Porcaro, Gregory Pat Scandalis, Julius Smith
JUCE provides a FlexBox class for creating responsive user interfaces, but using it directly is still time-consuming. To mitigate some of this complexity, we have developed a system for designing user-interface layouts for audio processors based on PluginGuiMagic, along with an XML file describing the parameters of the processor. A demonstration of how to add a new Faust-based effect to GeoShred will be given.
Lecture. Time: 2:55pm PDT. Location: Zoom main room.
MIDI 2.0, what it is, and what it means for instrument creators
Gregory Pat Scandalis, Nick Porcaro, Julius Smith
The core MIDI 2.0 specs were released a year ago, and there is still much work to be done. MIDI 2.0 is compatible with MIDI 1.0, but extends the fundamental communication paradigm between controllers and receivers to a bi-directional dialog with negotiation. This means that controllers and receivers will be able to exchange information about how messages are interpreted. This talk will provide an overview of MIDI 2.0, with a focus on what it means for instrument creators.
Lecture. Time: 2:30pm. Location: Zoom main room. Presenter: Pat Scandalis (gps@ccrma.stanford.edu). Presentation slides
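As a concrete taste of the new format: a MIDI 2.0 channel-voice Note On travels as a 64-bit Universal MIDI Packet (message type 0x4) carrying a 16-bit velocity, versus MIDI 1.0's 7 bits. A sketch of packing one (the helper name is ours; the field layout follows the UMP specification):

```python
def ump_note_on(group, channel, note, velocity16, attr_type=0, attr_data=0):
    """Pack a MIDI 2.0 Note On into two 32-bit UMP words:
    word 1: message type 0x4 | group | status 0x9n | note | attribute type
    word 2: 16-bit velocity | 16-bit attribute data."""
    word1 = ((0x4 << 28) | ((group & 0xF) << 24)
             | (((0x9 << 4) | (channel & 0xF)) << 16)
             | ((note & 0x7F) << 8) | (attr_type & 0xFF))
    word2 = ((velocity16 & 0xFFFF) << 16) | (attr_data & 0xFFFF)
    return word1, word2

# Middle C at full velocity on group 0, channel 0:
w1, w2 = ump_note_on(0, 0, 60, 0xFFFF)   # → (0x40903C00, 0xFFFF0000)
```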
MUSIC 101 Listening Rooms
Stephanie Sherriff (Lecturer) + MUSIC 101 Students (W20, SP20, F20, W21)
MUSIC 101: Introduction to Creating Electronic Sounds surveys basic concepts and techniques used to produce electronic sounds while examining historical and social evolution within music production. Creative assignments build upon technical and conceptual skills through practical application. During this process, students are encouraged to explore their own creative voice through music composition and sound design while integrating their own life experiences, imaginations, and musical preferences into the work they create.
During the CCRMA World Update, MUSIC 101 Listening Rooms will feature a curated list of project-based audio works dating from Winter 2020 onward, most of which were produced remotely due to the pandemic. The listening rooms are organized into separate virtual listening spaces (breakout rooms) that can be navigated freely, much like an installation. The work you will hear reflects the complexity of 2020: issues of loss, isolation, identity, social injustice, climate change, and global and political unrest.
Listening Rooms
MUSIC 101: Sonic Terrain
MUSIC 101: Voice
MUSIC 101: Beats with Daily Sounds
MUSIC 101: Samples, Sampling, Samples
MUSIC 101: Dealer's Choice
MUSIC 101: 4 Unit EPs
NOTE: Please consider participating silently with your camera off as you navigate between listening rooms. Headphones/amplification recommended.
Additional project information and links can be found here.
Installation. Time: 10:00am–5:55pm PDT. Location: multiple Zoom breakout rooms. Contact: Stephanie Sherriff (https://ccrma.stanford.edu/wp/101/ccrma-world-update/ stephanie@ccrma.stanford.edu)
Experiments in Modal and GPU real-time audio plugins
Travis Skare
GPU-accelerated cymbal modal synthesizer, cymbal-response reverberator, waveguide mesh plate with nonlinearities via reflectance.
Poster/Demo. Time: 3:45–5:55pm PDT. Location: Zoom breakout room and website. Presenter: Travis Skare (travissk@ccrma.stanford.edu)
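The underlying modal model represents a struck object as a bank of exponentially decaying sinusoids, one per resonant mode; since every mode is independent, the bank maps naturally onto GPU parallelism. A toy CPU sketch (the mode frequencies and decay times below are invented, not measured cymbal data):

```python
import numpy as np

def modal_hit(freqs, taus, amps, dur=1.0, sr=44100):
    """Sum a bank of exponentially decaying sinusoids: one mode per
    (frequency f in Hz, decay time constant tau in s, amplitude a)."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, tau, a in zip(freqs, taus, amps):
        out += a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
    return out

hit = modal_hit(freqs=[312.0, 947.0, 1583.0, 2490.0],
                taus=[0.8, 0.5, 0.3, 0.2],
                amps=[1.0, 0.6, 0.4, 0.3])
```

A real modal cymbal model uses hundreds or thousands of measured modes, which is where GPU acceleration pays off.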
Performable Virtual Musical Instruments
Julius Smith, Nick Porcaro, Jordan Rudess, Pat Scandalis
New hardware and software are ever expanding the power and scope of what we can do in real time with virtual musical instruments. This poster/demo will present a summary of best known practices today in the category of multitouch interfaces + MIDI/OSC controllers (but please contribute your own thoughts as well!).
The gold standard is always custom hardware and software, such as a LinnStrument connected to a powerful synthesis engine, but amazing results can be had as well on a general purpose iPad. In most such systems, there are common architectural features that are worth keeping. Recent enabling software developments will be summarized.
Poster/Demo. Time: 3:45–5:55pm PDT. Location: Zoom breakout room and webpage. Presenter: Julius Smith (http://ccrma.stanford.edu/~jos/)
JOS 2020 Research Summary (or "What We Did Over the Pandemic")
Julius Smith, Jonathan Abel, Chris Chafe, Orchisama Das, Esteban Maestre, Nick Porcaro, Mark Rau, Jordan Rudess, Pat Scandalis, Gary Scavone, Travis Skare, Prateek Verma, JackTrip Developers, Faust Developers
I will attempt to give brief summaries of the following publications along with JackTrip and Faust developments since the last Open House:
JOS, “A spatial sampling approach to wave field synthesis: PBAP and Huygens arrays,” https://ccrma.stanford.edu/~jos/huygens/
JOS, “Audio signal processing in Faust,” https://ccrma.stanford.edu/~jos/aspf/
E. Maestre, G. P. Scavone, and JOS, “Virtual acoustic rendering by state wave synthesis”, in EAA Spatial Audio Sig. Proc. Symp., Paris, France, Sept. 2019, pp. 31–36, and a new JASA paper
M. Rau and JOS, “Measurement and modeling of a resonator guitar”, Proc. Int. Symp. Musical Acoustics (ISMA-19), and “A comparison of nonlinear modal synthesis using a time varying linear approximation and direct computation”, JASA
T. Skare, J. S. Abel, and JOS, “Modal representations for audio deep learning”, AES Conv. 147
O. Das, JOS, and C. Chafe, “Improved real-time monophonic pitch tracking with the extended complex Kalman filter”, J. Audio Eng. Soc, vol. 68, no. 1/2, pp. 78–86, 2020
Lecture. Time: 3:20pm PDT. Location: Zoom main room. Presenter: Julius Smith (http://ccrma.stanford.edu/~jos). Presentation slides
Translations from Painting to Music and Vice Versa Using Neural Networks
Prateek Verma, Constantin Basica, Pamela Davis Kivelson, Ingrid Odlen
Neural Visual Style Transfer, first proposed by Gatys et al., has paved the way for computers to make realistic visual art in a controlled and stylistic manner. The ability to have control over any artistic style and content to create paintings and other art forms has been revolutionary. With pre-training, the algorithm works almost in real time and can thus be used in a wide variety of scenarios. In this work, we combine advances in machine listening to help guide the style transfer of visual elements. We learn to associate the elements of visual space that go along well with the audio of the background music being played. We rank the styles according to a music input. A style is automatically selected from a pre-chosen library according to the ranking, allowing fast real-time style transfer to be carried out. This gives musicians the ability to alter or, in some sense, "paint" by playing music.
Lecture. Time: 1:35pm PDT. Location: Zoom main room. Presenters: Prateek Verma (https://ai.stanford.edu/~prateekv/), Constantin Basica (http://www.constantinbasica.com), Pamela Davis Kivelson (https://www.pdkgallery.com)
What is Real? A Report from the CCRMA VR Design Lab
Ge Wang, Jack Atherton, Kunwoo Kim, Kathleen Yuan, Marise van Zyl
How do we make music in Virtual Reality? What new interactions and instruments are possible? How might we make music *together* in VR? What does it mean to design artful tools in the medium? In this report from the CCRMA VR Design Lab, learn about how we are approaching these questions in our recent research projects.
Lecture. Time: 10:30am PDT. Location: Zoom main room. Presenter: Ge Wang (https://artful.design/ https://ccrma.stanford.edu/~ge/)
CCRMA VR Design Lab: Posters & Demos
Ge Wang, Jack Atherton, Kunwoo Kim, Kathleen Yuan, Marise van Zyl
In this poster/demo session, the CCRMA VR Design Lab shares a few projects-in-progress in a fun, conversational setting. Several of these projects represent our early efforts in building social tools and experiences for creative expression in virtual environments.
Poster/Demo. Time: 3:45–5:55pm PDT. Location: Zoom breakout room. Presenter: Ge Wang (https://artful.design/ https://ccrma.stanford.edu/~ge/)
The Command-line Is Your Friend: one musician's perspective on programs, pipes, text files, scripting, web scraping, reformatting, automating, and not having to write HTML
Matthew Wright
The "command line" (aka shell, terminal...) is an ancient way of interacting with computers by typing commands and seeing what they print. All the components of this programming style have been mature and widely available for Unix-like operating systems (including the Mac Terminal) for decades, and are likely to persist long into the future of computing. "Piping" these commands together connects them in potentially powerful ways; this creates an "ecosystem" of simple programs like echo, wc, tr, and grep that excel at one task and make sense in context. The programs that process the text are themselves text and can be the output of other programs. Writing raw HTML is painful, and in 2021 everybody should be able to use markdown instead. I will show a variety of recent projects along these lines (mainly bash scripts), including commands to launch SLOrk pieces, the mechanism behind CCRMA's new https://ccrma.stanford.edu/docs documentation system, and easy recipes for converting .md files to .html. Possible advanced topics include installing pandoc via homebrew, web scraping with wget, and the mechanism that generates the HTML for https://ccrma.stanford.edu/ccrma-world-update
https://ccrma.stanford.edu/docs/common/command-line.html
Demo: mainly screen-share, browsing among all these projects as requested, showing code and results, breaking down techniques at any level of detail and covering what people find interesting. Time: 3:45–5:55pm PDT. Location: Zoom breakout room and webpage. Presenter: Matt Wright (https://ccrma.stanford.edu/~matt)
eleison III.
J.Zhu, Issei Herr, Chong Gu
The third iteration of a years-long collaboration with cellist Issei Herr, eleison III is a conversation between visual and aural languages. Twelve glyphs (and twelve points of view) are sonified four times, each subsequent utterance longer than the last, allowing the strophes to bloom. A digital release allowed us to compose layers of spatial sound and video, resulting in a new paradigm for co-creation.
Musical offering. Time: 11:50am PDT. Presenter: Julie Zhu (zhujulie@stanford.edu & http://juliezhu.net)
In the Death of My Homes I Will Speak
Vaim
IN THE SAFETY OF DISSOLUTION I COME ALIVE
OLEN VABA MU MAHA JÄETUD MAAL
IN THE WEIGHT OF MY ROCKS I AM FREE
JA MA TÄNAN JA LEINAN TEID KA
IN THE DEATH OF MY HOMES I WILL SPEAK
JA KUI MA MEENUTAN SIIS MA MÖIRGAN TAAS
Musical offering. Time: 1:55pm PDT. Location: Zoom main room. Presenter: Vaim (https://www.vaim.net)