220C: Josh Mitchell

 

This is a website tracking my progress as I prepare a long-form sonic experience made of a series of compositions and performances exploring concepts within nonlinear dynamics and chaos theory as artistic tools!

 


 

Week 1:

This week was largely spent researching and writing for a half-lecture, half-sound-collage demonstrating the concepts in nonlinear dynamics and chaos theory that inspire much of my work. It has been quite a challenge to give a clear sonic demonstration of how chaotic oscillations arise in nonlinear-feedback-based instruments, because the "edge of chaos" is such a small window in many of these systems that it can feel like I'm flipping a switch into a new sonic world rather than smoothly entering it. Another problem for this piece is the latency between my speaking into a mic and the result leaving the speakers.

 


 

Week 2:

This week was primarily dedicated to generating audio and visuals for "PCR," a piece that was originally about a trail I love in an open space preserve called Purisima Creek Redwoods. In light of the past few years, however, that acronym has become much more strongly associated with Polymerase Chain Reaction methods for quickly copying DNA, which are the basis of rapid COVID-19 tests. Thus, the piece has shifted meanings to be about this shift in meaning, as well as shifts in what words mean in general!

On the audio end, PCR features recordings of the original guitar line for the piece, which slowly blur into a spatialized cloud of effects-heavy recordings. This cloud gives way to an entirely synthesized instrument playing the same melody, in a slightly unsteady new musical context. Specifically, this new instrument is made by putting a modal model of a vibrating string and a custom nonlinearity in a feedback loop, creating chaotic oscillations which are then convolved with an impulse response of a guitar body. This week, I spent a lot of time fine-tuning this synthesized instrument, and landed on a "metallic-ization" over time by slowly increasing a few parameters of the string model. This slow warping defines the end of the piece, along with an envelope-controlled spatial reverb inspired by this artist's creative use of a gated reverb effect on his guitar:

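For anyone curious, here is a heavily simplified SuperCollider sketch of that feedback structure, not the actual instrument: a fixed resonator bank standing in for the modal string model, a tanh standing in for my custom nonlinearity, and a convolution with a guitar-body impulse response. Every name and value below is a placeholder.

```supercollider
// heavily simplified sketch of the instrument's structure (placeholder values):
// resonator bank ~ "modal string", tanh ~ "custom nonlinearity",
// irBuf = a guitar-body impulse response loaded beforehand with Buffer.read
SynthDef(\chaoticString, { |irBuf, fbGain = 2, amp = 0.2|
	var fb, exc, string, sig;
	fb     = LocalIn.ar(1);                                   // one block of feedback delay
	exc    = Impulse.ar(0) + (fb * fbGain);                   // initial "pluck" plus feedback
	string = Klank.ar(`[(1..8) * 196, (1..8).reciprocal, 2 ! 8], exc);
	LocalOut.ar(string.tanh);                                 // stand-in nonlinearity closes the loop
	sig    = Convolution2.ar(string, irBuf, framesize: 2048); // short IR assumed; a long one would want PartConv
	Out.ar(0, Pan2.ar(sig * amp));
}).add;
```

Roughly speaking, raising fbGain in a patch like this is what tips it from a plucked decay toward sustained, chaotic-sounding oscillation; in the actual piece, the "metallic-ization" comes from slowly warping the string model's parameters instead.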
On the visual end of my project, I gathered about 90 minutes of footage from the trail I mentioned above, and began work on a "granular video synthesizer" meant to concatenate small video clips together, with smooth crossfades analogous to the windows on individual grains in granular sound synthesis. This involved a lot of debugging, with nothing to show yet.

 


 

Week 3:

This week was primarily dedicated to generating my "granular" visuals for PCR. I've successfully incorporated a method for choosing each video grain based on the average RGB data of the previous video grain, which leads to an even smoother-seeming transition from one video grain to the next. When the guitar part changes to a different pattern, I've opened up the range of clips my video synthesizer is allowed to choose from and sped up the crossfading to ramp up the piece's energy. Towards the very end of the piece, I've layered in a clip from this video synthesizer where the RGB data it looks at has been messed with at the pixel level inside the video synthesizer code.
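The selection step itself is simple once each clip's average colour is precomputed. The video synthesizer isn't written in SuperCollider, but in sclang-style pseudologic (assuming the criterion is "closest average colour," and with a made-up ~chooseNextGrain name) it amounts to:

```supercollider
// nearest-neighbour choice over precomputed average colours (illustrative only;
// grainColors is an assumed list of each clip's average [r, g, b])
~chooseNextGrain = { |prevColor, grainColors|
	var dists = grainColors.collect { |c| ((c - prevColor) ** 2).sum };  // squared colour distance
	dists.indexOf(dists.minItem)                                         // index of the closest clip
};

// e.g. ~chooseNextGrain.([120, 80, 60], [[200, 10, 10], [115, 85, 70], [30, 30, 200]]) -> 1
```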

As the main melody returns to the synthesized instrument at the end of the piece, I've also layered in some data-bent copies of the video synthesizer's output, made by replacing arbitrary bytes in the video file using a program called Hex Fiend, which simply displays and allows editing of any file in a hexadecimal format. I love the results that methods like this provide, but they are very prone to crashing video players or corrupting video files, so getting the data-bent clips to render along with the regular outputs of my video synthesizer involved a lot of trial and error.

A binaural render of the piece is available here:

 


 

Week 4:

This week was mostly spent debugging a MIDI interface, both to integrate it into my performance setup and to fix the latency issues I mentioned in week 1. After hours of troubleshooting other, newer latency issues with the MIDI interface, it turned out that the answer lay in how SuperCollider initializes communication with any external MIDI device or software for the first time after the server has been booted. Because of this start-up process, a buffer of about two seconds is added to all of SuperCollider's outgoing MIDI messages. The quickest method I've found to remove this latency once it is no longer needed is essentially to initialize MIDI communication from SuperCollider twice in a row. While I hadn't previously played much with automating this side of SuperCollider using custom classes and methods, I did so this week with a good level of success.
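My reading of that workaround, in sketch form (device and port names are placeholders, and I'm only guessing that "initializing twice" means two calls to MIDIClient.init):

```supercollider
// workaround sketch: the first init after boot carries the ~2 s start-up buffer
// on outgoing messages, so init again right away
(
MIDIClient.init;
MIDIClient.init;
~ctl = MIDIOut.newByName("My Controller", "Port 1");  // placeholder device/port names
~ctl.latency = 0;  // optional, separate from the start-up issue: drop sclang's scheduling latency
)
```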

 


 

Week 5:

Continuing from last week, I built a fully-functioning performance system for my piece and digital chaotic instrument titled "autoradiograph," using that same MIDI interface. As quick background, the instrument consists of four voices, each a saw wave going into a second-order low-pass filter. Each filter is modelled on old analog synth implementations of a typical Sallen-Key filter, with a nonlinearity introduced at a specific point in the circuit. In this instrument, however, that nonlinearity is a chaotic map (pretty close to a Gaussian map). Each voice periodically determines the driving sawtooth frequency of another, and the voices are then placed in physical space, with various filter and gain parameters being the only things I can vary as a performance. This results in a gritty, dark, and seemingly-unpredictable sound which reminds me of the feelings I got when first working with radioactive materials in an undergraduate physics lab.
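As a very rough sketch of one voice's signal path (the coupling between voices is omitted, and the Gaussian-map-style term, parameter names, and values are my own placeholders rather than the actual instrument; here the nonlinearity is wrapped around the filter in a block-rate feedback loop rather than sitting inside a proper Sallen-Key model):

```supercollider
// rough single-voice sketch: saw -> second-order low-pass, with a
// gaussian-map-like nonlinearity in a block-rate feedback path (placeholder values)
SynthDef(\autoradiographVoice, { |freq = 55, cutoff = 600, rq = 0.4, alpha = 6, beta = -0.5, amp = 0.1|
	var fb, sig;
	fb  = LocalIn.ar(1);
	sig = Saw.ar(freq) + (exp(alpha.neg * fb.squared) + beta);  // gaussian-map-style term
	sig = RLPF.ar(sig, cutoff, rq);                             // second-order low-pass
	LocalOut.ar(sig);
	Out.ar(0, Pan2.ar(sig.tanh * amp));
}).add;
```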

For the MIDI controller, I had a lot of trouble using 9 faders to control more than 9 parameters (about 5 per synth voice). What I landed on was using MIDI note-on messages from the controller to set which of the 9 available faders' MIDI CC messages were sent to which of SuperCollider's control-rate busses. This effectively lets me press a button to access a different "bank" of faders, which are re-labeled by changing backlit 7-segment displays. Because of how SuperCollider seems to send MIDI messages back to the controller, storing and recalling settings for each bank of faders was very challenging. This involved writing a lot more custom classes and methods than I'm used to in SuperCollider, but it was great practice and pretty fun overall. Now that this convoluted multi-bank control method works for "autoradiograph," it will hopefully read as more of a performance to an audience than just using a computer.
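Stripped to its core, the bank-switching is just a couple of MIDIdefs. A sketch (CC numbers, note numbers, bank count, and bus layout are all assumptions, and the harder part of echoing state back to the controller's displays is left out):

```supercollider
// 2 banks x 9 faders -> 18 control busses (all numbers here are assumptions)
(
MIDIClient.init;
MIDIIn.connectAll;
~bank   = 0;
~busses = { Bus.control(s, 1) } ! 18;

MIDIdef.noteOn(\bankSelect, { |vel, note| ~bank = note % 2 });   // buttons choose the bank

MIDIdef.cc(\faders, { |val, num|
	var idx = (~bank * 9) + (num - 1);                           // faders assumed on CC 1..9
	~busses[idx].set(val / 127);
}, (1..9));
)
```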

 


 

Week 6:

I think the most compelling thing about "autoradiograph" sonically is the interplay between chaos and order, and I'm interested in making this "edge-of-chaos" sonification read more clearly for listeners who are less familiar with chaotic and nonlinear dynamics. As the performer of this piece, or with a mixer-feedback setup, I can very clearly feel the moments when these jumps between order and chaos happen as I'm turning a knob or moving a fader, but an audience doesn't automatically receive that tactile feedback, unfortunately.

With that in mind, this week I attempted to gather easily-reproducible, "minimally-chaotic" filter-feedback parameters, so that chaos can be introduced to the audience of these pieces much like it would be in a math class, just with sounds instead of equations. The problem is avoiding a conceptual jump at the same moment as the shift from order to chaos, which has turned out to be very non-trivial. Any nonlinearity I've introduced for feedback around a single filter is either not chaotic or feels like too much of a conceptual departure from any previous non-chaotic filter-feedback example. I've had some success revisiting Nathan Ho's blog post from a couple of years ago on what he calls "Feedback Integrator Networks," replacing the leaky integrators he uses with second-order bandpass filters. A network of at least four of these filters seems to be needed for chaotic oscillation using a more familiar tanh-based nonlinearity than the modified-logistic or Gaussian maps I typically use. However, I still believe I'm getting more sonically-compelling results from these maps, especially given a spatial audio environment.
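A minimal version of that band-pass variation, for reference; the node frequencies, random coupling weights, and noise excitation here are arbitrary choices of mine, not anything taken from Nathan's post or my actual patches:

```supercollider
// four-node feedback network: band-pass filters in place of leaky integrators,
// tanh nonlinearity, random coupling weights (all values arbitrary)
SynthDef(\fbNetwork, { |amp = 0.1|
	var n = 4, fb, nodes;
	fb    = LocalIn.ar(n);
	nodes = n.collect { |i|
		var input = Mix(fb * ({ Rand(-1.0, 1.0) } ! n));         // weighted sum of all node outputs
		BPF.ar((input * 2 + Dust.ar(0.5)).tanh, [200, 450, 900, 1700].at(i), 0.3)
	};
	LocalOut.ar(nodes);
	Out.ar(0, Splay.ar(nodes) * amp);
}).add;
```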

On a related note, I've found that the difference between a discrete chaotic map and a numerical solution to a set of continuous differential equations matters a great deal for how each can be used in SuperCollider SynthDefs, which further informs the direction I must take. SuperCollider feedback happens at the level of audio blocks rather than single samples. This has little effect on the sounds from a discrete map, but it greatly messes with the timing of the more complicated numerical methods that are often required to reproduce continuous chaotic oscillation, and it even affects a simple forward Euler implementation. While I could reduce the size of SuperCollider's audio blocks down to one sample, this has unpredictable effects elsewhere that I don't think would be smart to introduce in a live setting. This is the reason I've mostly stuck to my modified-logistic and Gaussian maps. Other chaotic maps often rely on sine or modulo-wrapped sorts of nonlinearities, which I've found quickly dissolve into white-noise-y timbres without doing much of anything interesting.
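For reference, the block-size change mentioned above really is just a server option set before boot; I'm simply choosing not to risk its side effects in a live setting:

```supercollider
// true single-sample feedback, at a CPU cost and with side effects I don't trust live
s.options.blockSize = 1;
s.reboot;
```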

 


 

Week 7:

I was out sick this week, unfortunately. While I couldn't quite think clearly, I still wanted to feel productive, and channeled that towards learning some basic p5.js and Processing coding. I really started enjoying working with code for visuals in ChuGL for Ge's Music 256 course last fall, and I know both p5 and Processing have OSC capabilities, which should make it easy to reach a similar workflow with SuperCollider. Both Ge and Andrew highlighted ChuGL as a more animation- and interaction-friendly alternative to Processing in that course, due to the latter's lack of default continuity between frames for individual objects. However, I found this limitation very easy to work around, and overall p5 feels very similar to how I personally approached ChuGL, especially with the very explicit p5 documentation that let me easily translate what I learned from ChuGL.

I also did some fever-state brainstorming in bed this week, centered around what I think was my most important takeaway from a nonlinear dynamics class I took a year ago. The professor of that class suggested that this subfield of physics and mathematics is much more about learning what the "right" questions to ask are than learning what the answer to any old question is. Some questions in chaotic dynamics are un-answerable given what any human can possibly know, but the right questions can yield powerful new information. As a concrete example of this, we can't accurately predict the weather on a given day a year from today, let alone 20 years in the future. However, we can show that in all the crazy variance that can happen from one day to the next, all possible trajectories without extreme interference go in a certain direction over a long time frame. "What's the weather on May 20th, 2026?" is a pretty useless question to ask right now, but "What are the long-term, global trends in the weather?" is a pretty important question to ask, especially considering the very alarming results we've gotten so far.

So then, what are the "right" questions to ask about making music with chaotic dynamical systems, if there are any? Personally, I've found it very easy to explore "decaying" or "disintegrating" sonic textures using chaotic systems. I'm drawn to an ephemeral sort of beauty in most chaotic sounds, and the seeming noisiness and sensitive dependence on initial conditions that are inherent in these systems greatly amplify that artistic pull on a conceptual level. This sort of broken-ness is common all over the universe at every scale relative to us. However, is it the ideal state of things? In playing with that question, I've come up with a variable sort of spring-force coupling between particles in a chaotic attractor. While I haven't done any formal analysis of this type of system, if two particles are uncoupled or very weakly coupled and start very close together, their trajectories diverge as expected from the fractal nature of a chaotic attractor. This would constitute a Lyapunov exponent of greater than 0. However, as this coupling force is increased for one of the particles, it will consistently follow close to the chaotic trajectory of the other, resulting in what looks like a Lyapunov exponent of less than or equal to 0, while keeping the beautiful trajectory of that guiding particle. Musically, this can be used as a global control for the "together-ness" of whatever aspect of a piece the performer desires.

To the left is a visual-only example of this concept made with p5.js, depicting three particles in the Lorenz attractor. Click anywhere on the animation to restart it. Both the red and green particles start at the exact same location, but the green particle experiences a spring force pulling it towards the blue particle, and the red particle does not. The bars to the left move up and down with the distance between the particles, in their respectively-colored gradients.
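For the numerically inclined, the coupling itself is nothing fancy. A quick sclang sketch of the idea (the constants, step size, spring strength, and the forward Euler integration are placeholders for illustration, not the values or method used in the animation):

```supercollider
// two Lorenz "particles": a guide and a follower that starts slightly offset;
// the follower feels a spring force pulling it toward the guide
(
var sigma = 10, rho = 28, beta = 8/3, dt = 0.005, k = 2.0;   // set k = 0 to watch them diverge
var lorenz = { |p| [sigma * (p[1] - p[0]), (p[0] * (rho - p[2])) - p[1], (p[0] * p[1]) - (beta * p[2])] };
var guide = [1.0, 1.0, 1.0], follower = [1.0, 1.0, 1.0001];
2000.do {
	var spring = (guide - follower) * k;                     // spring force toward the guide
	guide    = guide + (lorenz.(guide) * dt);                // forward Euler step
	follower = follower + ((lorenz.(follower) + spring) * dt);
};
((guide - follower) ** 2).sum.sqrt.postln;                   // stays small when k is large enough
)
```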

 


 

Week 8:

Let's pull a 180 on that idea from last week in order to highlight how different it is from my own (and many others') typical approach to chaotic synthesis. I've been playing around with chaotic feedback on a looped sample for a while, and I believe I finally have a compelling little piece of music based on this concept. I call it "Buffer Decay," for reasons that should become obvious below. First, it's heavily inspired by William Basinski's Disintegration Loops, which were one of my first introductions to ambient and experimental electronic music. My personal favorite of the series is included below for reference. However, I'd encourage everyone to look into the weird history and culture behind all of Basinski's Disintegration Loops, and tape music in general.

The idea with my piece is essentially to garble a looping audio sample using itself as a nonlinear map for chaotic feedback. Because the sample is looping, it can be represented by an infinite sum of sine waves at frequencies which are multiples of the loop frequency. One common chaotic map is x[n+1] = A*sin(x[n]), which is chaotic for most values of A greater than one. Therefore, the sample itself can be viewed as a sum of infinitely many of these sine maps, and when scaled properly by a global factor, it will produce a chaotic map. What information is that map iterating over, though? There are two "tape heads" running through the sample, to borrow terminology from tape music. At a defined interval, one head grabs the value of the sample at its current position in the loop and shifts the phase of the second head within the loop by that amount (scaled by some factor, of course). That second "write" head then replaces the sample at its new position with whatever the first "read" head is reading. This will, over time, garble the original loop in a way that is both entirely deterministic and entirely chaotic.
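A rough server-side sketch of those two heads, for concreteness; the buffer handling, scaling factor, update interval, and panning are placeholders rather than the piece's actual values or routing:

```supercollider
// a read head loops the buffer; at each interval the write head's phase is offset
// by the value currently under the read head, and the write head overwrites the
// loop with whatever the read head is reading (placeholder values throughout)
SynthDef(\bufferDecay, { |buf, scale = 1.3, interval = 0.1, pan = 0, amp = 0.5|
	var frames   = BufFrames.kr(buf);
	var readPos  = Phasor.ar(0, BufRateScale.kr(buf), 0, frames);
	var readSig  = BufRd.ar(1, buf, readPos, loop: 1);
	var offset   = Latch.ar(readSig, Impulse.ar(interval.reciprocal)) * scale * frames;
	var writePos = (readPos + offset).wrap(0, frames);
	BufWr.ar(readSig, buf, writePos, loop: 1);
	Out.ar(0, Pan2.ar(readSig, pan, amp));
}).add;
```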

A simple spatialization works very well to highlight the sensitive dependence on initial conditions and deterministic, yet apparently-random trajectories that are inherent in chaotic systems. If multiple copies of the above process with slightly different scaling factors are played panned around a room, it is easy to hear the exponential shift from nearly identical loops to a slow separation that quickly becomes a textural cloud. This highlighting on a longer time-scale will be used in my concert as a setup for the question I asked last week and its resulting conclusion.
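Assuming the sketch above and separate copies of the loop sitting in a hypothetical ~bufs array, the spatialized version amounts to a handful of copies with slightly different scaling factors (shown here as a simple stereo spread rather than the full multichannel setup):

```supercollider
// four copies, slightly different scaling factors, spread left to right
4.do { |i|
	Synth(\bufferDecay, [\buf, ~bufs[i], \scale, 1.3 + (i * 0.01), \pan, i.linlin(0, 3, -1, 1)]);
};
```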

 


 

Week 9:

All my work this week went into concert prep. First, any and all visual elements were controlled by sending OSC commands to a fullscreen Processing window for the CCRMA stage projector, in an effort to make troubleshooting my visuals as easy as possible. Second, any pre-recorded pieces were rendered as ambisonics files I could load directly into a buffer and play from SuperCollider. These preparations left only one piece of software on my laptop that I needed to care about during the performance, and I have plenty of experience debugging SuperCollider on the stage. If you look at my entry for week one, you'll notice a speaking portion with effects, which I opted not to include here. When I rehearsed that speaking portion without effects for two separate test audiences in the week leading up to the concert, I found that the arc of my explanation didn't lend itself to any particular part being garbled as much as I had planned. So I opted to simply give a spoken intro to some of the concepts present in my work, so that that aspect might better shine through for my audience.
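For the curious, the OSC-plus-ambisonics preparation mentioned above reduced each cue to a couple of lines like these; the address pattern, port, and file path are placeholders (12000 is just the port oscP5's examples tend to use):

```supercollider
// placeholder cue sketch: OSC out to the Processing window, ambisonic render into a buffer
~visuals = NetAddr("127.0.0.1", 12000);
~visuals.sendMsg("/scene", 3);
~piece = Buffer.read(s, "~/renders/pcr_ambi.wav".standardizePath);
// (played back later through the stage's ambisonic decoder)
```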

However, anyone who was present that evening will know that there were some massive technical difficulties at the beginning of the performance. These threw me off a great deal, and in improvising to catch up with myself I did not deliver the introduction with the quality I had practiced for. Instead, I tried to sprinkle the information I had planned for the start of my concert in between pieces, which I believe had some impact, if not as much as I had hoped for. Barring this painful beginning, everything went as well as I could have hoped for, sonically and visually.

Now, what were those errors at the beginning? They were continuations of errors I'd been fighting all day long, and I believe their root cause was in how the many MOTU devices on the stage communicate with each other and with an external laptop. The day had started fine, but around noon I started hearing what sounded like mismatched-sample-rate aliasing on select sounds that had tested fine earlier in the week. We restarted the entire stage several times trying to fix this, but it kept reappearing. My best guess is that these reappearances were due to a device other than my laptop (or perhaps the droid) being set as the master clock for the usual 48 kHz audio signals, based on where I have and have not been able to reproduce the issue. After our last system-wide reboot right before the concert, everything tested fine, and I invited everyone in to hit go on SuperCollider. Then there was a horrifying noise, and my connection to the droid was dropped. I still have no idea what caused this particular error, and it is what threw me off so badly at the start of the show, after I thought I had fixed everything. With my limited hindsight, I believe this was an issue with the MOTU 8M driver on my laptop that I've never encountered before in all my time using it with the stage.

 


 

Week 10:

This week I've been working on adapting my final piece for the concert into an interactive webpage as a deliverable for Wednesday's presentation. So far, I've learned a framework for getting WebChucK and p5.js to speak nicely with each other. Since the original piece relied on all of the math and sound coming from SuperCollider, with a simple Processing script to visualize the math involved, I pretty much already have the visual side of the website done. However, my personal workflows for implementing the differential equations involved in chaotic synthesis in ChucK and SuperCollider are wildly different, so most of the remaining work before Wednesday is translating that portion of the piece.