Chris Carlson - Music 220c Project Documentation/Blog


June 5, 2011: Final Performance Video

Here is some footage from the final performance on May 26. This performance uses a custom looper, sample player, and filtered feedback delay spatializer built in ChucK with 8 channels of output. Prerecorded samples from the library (first and middle sections of the piece) are mixed and panned. Live input is captured from an Akai reel-to-reel deck through a Pro Co RAT distortion pedal and a Mackie mixer, processed through the filtered feedback delay spatializer, and sampled/looped. All looping/sampling is controlled via a Korg nanoKONTROL.


May 4, 2011: Program Notes for the May 26 Concert

Volumes of Voices

This piece is derived from a collaboration with Carlin Wragg, a student at the NYU Interactive Telecommunications Program, and the New York Public Library. Together, we have been building a narrative sound tour that exposes library visitors to the history and architecture of each room, rare texts housed in the collections, and, perhaps most importantly, the rich world of sound that lives within the walls of the building. The piece being performed tonight is an abstraction of this work, weaving source recordings such as the heavy pages of the Gutenberg Bible being turned and soft footsteps winding up the staircase in Astor Hall into an evolving tapestry of noise.


May 2, 2011: Rink Abrasion


A few months ago, my friend Quinn Collins asked me to put together a 5-10 minute video piece for a contemporary music concert he was organizing at a local record store near my hometown in PA. I showed the final product in class today. It features stop motion animation by my friend Ben Nicholson and excerpts from several of my recent compositions. The title is a reference to a memory of a birthday party gone sour at Magic River Skateland, a roller skating rink (since converted to a furniture store...) in my hometown of Danville, PA.


May 2, 2011: Volumes of Voices: Final Drafts for Sound + the City

The response to our latest version at NYU last Tuesday was quite positive compared to the past few weeks' reception. The only new suggestion was that the low frequency content of the narrator's voice was slightly lacking. I originally rolled it off (a little too much, perhaps) to reduce some of the low frequency plosive noises, but brought it up again slightly for this week's revision. When listening to the recording again in Studio E at CCRMA, I discovered a very present 60Hz hum (plus harmonics) and a hiss that fades in and out with the vocal track. I experimented with notch filtering and multiband compression/expansion and was able to reduce the unwanted noise quite a bit.
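For anyone curious about the notch-filtering idea, here is a minimal sketch of it in ChucK. The file name, notch frequencies, and Q values are placeholders, not the settings I actually used:

    // hypothetical hum-removal sketch: cascaded band-reject (notch)
    // filters at 60 Hz and its first two harmonics
    SndBuf buf => BRF n1 => BRF n2 => BRF n3 => dac;
    "narration.wav" => buf.read;   // placeholder file name
    60.0 => n1.freq;  120.0 => n2.freq;  180.0 => n3.freq;
    // higher Q means a narrower notch, so less damage to the voice
    25.0 => n1.Q;  25.0 => n2.Q;  25.0 => n3.Q;
    buf.length() => now;           // play through the file once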

Carlin's final prototype/concept presentation is today and will feature guest critics/sound designers who will provide feedback on our work. The overall trajectory of the final piece will move through many different audio spaces as the listener navigates physical space. We wanted to have a new sound example that demonstrates the contrast between different movements. The first movement (linked in my previous posts) takes place in Astor Hall, a large reverberant space that is the central hub of the Library. That portion of the piece is intended to reflect the grandeur of the space. This week's recording will be played when the listener is situated on a stair landing above and slightly away from the larger hall. The ambience there is quieter and less reverberant, which is reflected in the recording. The content of this segment provides a brief diversion from the main narrative in the form of a sound poem. Sampled readings of various classics of Western literature are mixed with flute whistle tones and granular bursts.
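Since "granular bursts" may be unfamiliar, here is a rough sketch of the idea in ChucK: short, enveloped grains read from random positions in a source recording. The file name and all grain parameters are placeholders, not values from the actual piece:

    // hypothetical granular-burst sketch: short enveloped grains
    // read from random positions in a source recording
    SndBuf buf => ADSR env => dac;
    "reading.wav" => buf.read;                 // placeholder source file
    env.set(2::ms, 5::ms, 1.0, 10::ms);        // fast attack/release envelope
    while (true) {
        Math.random2(0, buf.samples() - 4096) => buf.pos;  // random grain start
        Math.random2f(0.5, 2.0) => buf.rate;               // random pitch shift
        env.keyOn();
        Math.random2f(15, 60)::ms => now;                  // grain length
        env.keyOff();
        Math.random2f(20, 200)::ms => now;                 // gap between grains
    }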

The sound poem is available here.

Here is the latest version of the first movement (with improved equalization of the narration track).

[Image: Astor Hall]

April 25, 2011: Volumes of Voices: First Movement

The demo recordings that I posted last week were played for Carlin's class at NYU, and we received some very good constructive feedback. One common remark was that the chime sounds conjured up too much of a science fiction theme. I had already been feeling unsatisfied with the tonal content in the first draft, so it is good to know that my doubts were on target. The second comment was that the narrator's voice was buried too far in the mix, becoming lost in the ambient sounds. Earlier this week, Carlin and I had made a conscious decision to lower the vocals, primarily in the interest of making this recording more of a sound experience and less of an audio tour. However, this week we are going to take it in a different direction.

For the new iteration, we decided to focus on improving the vocal recording and redoing the musical arrangement. Carlin found a voice actor who was able to provide an excellent reading of the script. I spent time with my fellow master's student, Hunter McCurry, recording some violin and piano takes with an AKG C-414 microphone. While composing and mixing this week, I focused specifically on creating a sense of space between the underlying sounds and the narration, while still allowing the sounds to interact and form a cohesive whole.

Here is the latest version.


April 19, 2011: Volumes of Voices/NYU ITP Collaboration

As it turns out, I will be pursuing two separate projects this quarter. In addition to continuing to expand my live looping/processing system, I will be collaborating with Carlin Wragg from NYU's Interactive Telecommunications Program (ITP) on a project called Volumes of Voices. Together, we will be developing a hybrid sound poem/narrative tour inspired by the New York Public Library. The end result will be a mobile app that will allow users to stream our composition while viewing photos and information about the library's interesting artifacts, rare texts, and beautiful architecture. Carlin has primarily been writing a script, gathering sounds from the library, and recording the narrative. I have been working on building soundscapes that will surround the narrative and developing musical phrases for the various movements within the piece.

Here are a few initial tests:

Carlin's project documentation can be found here.


April 6, 2011: Project Overview/220b Final Documentation

My research this quarter will involve continuing to develop and expand the live performance system that I built for Music 220b. I hope to investigate and implement additional custom effects and control mappings. Ultimately, I will develop a new piece to be performed at the end of the quarter.

The current state of the project incorporates sampling, MIDI-controlled loop manipulation, and multiband spatialization. A single channel of audio input is processed through a looping engine and a custom 8-channel filtered feedback delay spatialization effect. A Korg nanoKONTROL MIDI controller is tightly coupled to the looping engine, allowing the performer to trigger recordings of up to seven unique loops, manipulate playback rates and loop directions, and adjust output levels. A collection of prerecorded found sounds is also available. Each of these dry recordings may be panned incrementally around the sound field. Signals sent to the multiband spatialization effect are passed through a bank of bandpass filters, separating out low, mid, and high frequency ranges. These filtered inputs are then processed separately through unique filtered feedback delay lines for each of the eight output channels. The outputs of the delay lines are randomly cross-faded with each other, resulting in echoes and resonances that glide through the eight-channel sound field in real time.
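For the technically curious, here is a heavily simplified two-channel sketch of this signal flow in ChucK. The crossover frequencies, delay times, and feedback gains are placeholders, and the in-loop filtering of the real system is omitted for brevity:

    // simplified 2-channel sketch of the multiband feedback delay
    // spatializer: split the input into three bands, feed each band
    // into per-channel feedback delays, and randomly cross-fade the
    // delay outputs across the sound field
    adc => LPF low;   400.0 => low.freq;                  // low band
    adc => BPF mid;   1500.0 => mid.freq;  1.0 => mid.Q;  // mid band
    adc => HPF high;  4000.0 => high.freq;                // high band

    fun void fbDelay(UGen src, dur dt, float fb, int chan) {
        src => Delay d => Gain out => dac.chan(chan);
        d => Gain loop => d;          // feedback path
        dt * 2 => d.max;  dt => d.delay;
        fb => loop.gain;
        // wander the output gain so echoes drift between channels
        while (true) {
            Math.random2f(0.0, 1.0) => float target;
            for (0 => int i; i < 50; i++) {   // ~1 second ramp toward target
                out.gain() + (target - out.gain()) * 0.1 => out.gain;
                20::ms => now;
            }
        }
    }

    spork ~ fbDelay(low,  200::ms, 0.6, 0);
    spork ~ fbDelay(low,  250::ms, 0.6, 1);
    spork ~ fbDelay(mid,  120::ms, 0.5, 0);
    spork ~ fbDelay(mid,  160::ms, 0.5, 1);
    spork ~ fbDelay(high,  80::ms, 0.4, 0);
    spork ~ fbDelay(high,  90::ms, 0.4, 1);
    while (true) 1::second => now;   // keep the parent shred alive

The actual system extends this topology to eight output channels per band, with the random cross-fades producing the gliding echoes described above.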

The following audio file is an improvisation using a 6-channel version of the system collapsed to two channels.

220B Final Project Test Render by cloudveins

This video contains my final presentation of the system at the end of the Winter quarter.



Here is a demo video with an overview of the controls and a brief performance example:



Download the full source distribution here.


A few notes regarding installation and execution:

1. The looping functionality is very tightly mapped to the Korg nanoKONTROL MIDI controller. I have included the "sceneset" (all MIDI control mappings), which can be downloaded to your nanoKONTROL (file: 220_final_nanoKontrol_sceneset.nktrl_set). Please make sure to install this; otherwise you will need to look at midiControl.ck and modify the MIDI mappings (msg.data2 = control change number) for each CC# to match your controller.

2. In order for the code to run successfully, you must modify the paths near the top of Noise.ck and LaunchPad.ck to point to the source directory and the sounds subdirectory.

3. Run chuck --probe in the terminal to determine the device number of your nanoKONTROL, then modify line 53 of midiControl.ck to use the appropriate device ID number. If the nanoKONTROL is successfully opened, you will see a printout in the ChucK console window (or in the terminal) telling you that the nanoKONTROL was selected as the MIDI device.

4. To run the instrument, open LaunchPad.ck and add it to the virtual machine (or run LaunchPad from the terminal). You should see all shreds launch in the console monitor.

Do not be alarmed when you see "Stabilizing: #" in the command prompt as you are playing. This message simply indicates that a filter is being turned off temporarily because its output exceeded a threshold. This monitoring system allows us to safely approach the edge of "dangerous resonances." It is a feature, not a bug :D
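For anyone unfamiliar with MIDI handling in ChucK, the device-opening pattern described above boils down to something like the following sketch. This is generic code, not the actual midiControl.ck, and the device number 0 is a placeholder for the ID reported by chuck --probe:

    // generic ChucK MIDI-input sketch: open a device and print
    // incoming control change messages
    MidiIn min;
    MidiMsg msg;
    0 => int device;                 // placeholder: use your chuck --probe ID
    if (!min.open(device)) {
        <<< "could not open MIDI device", device >>>;
        me.exit();
    }
    <<< "opened MIDI device:", min.name() >>>;
    while (true) {
        min => now;                  // block until MIDI arrives
        while (min.recv(msg)) {
            // for a CC message: data1 = status byte, data2 = CC number,
            // data3 = controller value (0-127)
            <<< msg.data1, msg.data2, msg.data3 >>>;
        }
    }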

The following controls are available for the live looper:


TRANSPORT: SCENE 1 SCENE 2