Music 220C
Research Seminar in
Computer-Generated Music
Linden Melvin
The Idea
My plan is to create
a series of sound manipulators (in the form of ChucK patches that trigger MAUI
interfaces) that will be used to alter the sounds produced by performers in an
interview scenario.
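As a rough sketch of what I mean by a ChucK patch with a MAUI interface, here is a minimal manipulator: a slider that controls the gain of the live input. (This is illustrative only; it assumes miniAudicle's MAUI support, and the element name and ranges are placeholders, not from an actual patch.)

```chuck
// minimal sketch: one MAUI slider controlling the live input level
adc => Gain g => dac;
0.5 => g.gain;

MAUI_Slider level;
level.range( 0.0, 1.0 );
level.value( 0.5 );
level.name( "live input level" );
level.size( 300, 100 );
level.display();

while( true )
{
    level => now;            // wait for the slider to move
    level.value() => g.gain; // apply the new level to the live sound
}
```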
Inspiration
Much of the
inspiration for this project came from my final project for 220B that focused
on manipulating sound samples to create a soundscape. The most important
parameter of the performance is that the sounds are produced and incorporated
into the soundscape organically, i.e., all the sounds the audience hears in the
music are taken from the sounds they have already heard from the performers.
A
few examples
Here are some sample WAV files of what types of
sounds I want to produce in the interview. These clips were taken from audience
members who were invited up on stage and asked a series of questions. In turn, these
clips were used to create a soundscape, produced entirely from these sounds.
New Direction
I have been working
toward developing sound manipulating ChucK patches that will turn the computer
into a mixing board. My idea, at this point, is to create several (in the realm
of 4 or 5) patches that all control different parameters of the sound. Some of
the patches will have the ability to record samples of the live sound; others will
have the ability to manipulate the live sound. In this way, all of the
computers will alter some aspect of the sound, contributing to the soundscape.
Performers will produce
all the sounds on stage in an interview scenario. The sounds produced on stage
will be heard over the soundscape being produced, but I hope that the soundscape
will come to challenge the live sound.
More Change
I realized that in
addition to manipulating live sound, I also wanted to create a means of
manipulating sounds that were recorded and played back whenever the performer
(myself) wanted. In order to get this process started, I met with Michael about
how to set up a system that will allow me to record and store multiple samples.
We came up with a system that tracks the sound coming through the ADC sample by
sample and chucks those samples into an array that we can then pull from later.
We also made it so this array can read and write samples at different sample
rates to create some interesting effects. I plan on using this code in
conjunction with the Sound Control code to develop a soundscape that will
eventually overpower the people creating the sound. Here is an example of the
Recording Machine I came up with.
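The original patch is not reproduced here, but the core of the approach can be sketched like this: track the ADC sample by sample into a float array, then read the array back at a different rate. (Buffer length, playback rate, and names below are placeholders, not the values from the actual patch.)

```chuck
// sketch of the "Recording Machine" idea: record the adc into an
// array sample by sample, then read it back at a different rate
2::second => dur LENGTH;
(LENGTH / samp) $ int => int N;
float buffer[N];

// record: grab one sample from the adc every samp
adc => blackhole;
for( 0 => int i; i < N; i++ )
{
    adc.last() => buffer[i];
    1::samp => now;
}

// play back at half rate through an Impulse (each sample held twice)
Impulse imp => dac;
0.5 => float rate;
0.0 => float pos;
while( pos < N - 1 )
{
    buffer[pos $ int] => imp.next;
    rate +=> pos;
    1::samp => now;
}
```

Reading with a fractional position like this is what makes the different read/write sample rates (and the resulting pitch effects) possible.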
Speaking People
Originally, I was
considering the option of having people speak into the microphone randomly
throughout the performance. I quickly realized that it probably would not be in
my best interest to have people in charge of making the sounds whenever they
wanted for my entire piece. Instead, I brought it up with the class and they
thought it might be a good idea to bring in actors. As luck would have it, I am
in a play in the Drama Department right now and the cast was more than willing
to come and partake. The more I thought about it, however, the more I realized
that having actors, even reading a script, takes away from my core ideal of
having the entire piece organic. I ditched that idea and started thinking of
new ways to create sound. I went back to an idea I had in 220B about dreams. I
thought of creating a piece in which everyone described a dream they had and I would
manipulate the sounds so dramatically that they would be incomprehensible. The
more I thought about it, the more I considered how interesting it would be to
simply have the speakers standing in front of the audience speaking about their
dreams. In order to get them to talk about their dreams, I needed to set up an
interview situation. And hence, the interview idea was born. I have been
working out questions for the interview, but the most important part of the
performance is that there will be an interview happening and the sounds created
in the interview will eventually take over as they are manipulated and reused.
More Manipulation
I have been working on a
way to add even more sound manipulation to my project, and I think I have come
up with something interesting. I have made a system of granular synthesis that
will tear a recorded sound sample into little pieces and allow me to play it
back however I see fit. I think it will be interesting to throw in this added
layer of complexity near the climax of the piece and figure out how to make the
ADC code and this code work together. This patch will add more musicality to
the piece and will also add a layer of interest from a technical standpoint.
Here is an example of the code I have been using.
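The code itself is not included in this write-up, but a simplified sketch of the granular layer looks something like the following, using ChucK's LiSa unit generator: record a short live sample, then fire short grains from random positions at random rates. (Grain sizes, durations, and ranges here are illustrative.)

```chuck
// sketch of the granular layer: capture live sound into LiSa,
// then tear it into small grains played back at random rates
adc => LiSa lisa => dac;
2::second => lisa.duration;

// capture two seconds of the live sound
1 => lisa.record;
2::second => now;
0 => lisa.record;

10 => lisa.maxVoices;
while( true )
{
    lisa.getVoice() => int v;
    if( v > -1 )
    {
        // random position and playback rate for this grain
        Math.random2f( 0.5, 2.0 ) => float grainRate;
        Math.random2f( 0.0, 1.5 )::second => dur pos;
        lisa.rate( v, grainRate );
        lisa.playPos( v, pos );
        lisa.rampUp( v, 20::ms );
        80::ms => now;
        lisa.rampDown( v, 20::ms );
    }
    Math.random2f( 10, 60 )::ms => now;
}
```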
An Interview
Today in class, the idea
of an interview came up. Instead of using a scene from a play or writing
something scripted, the idea of using interview questions will open up
opportunities for chance and randomness to enter into the project. This way,
the performance will be different every time and will be created from different
audio samples. Also, since it will be an interview, it will not require the
performers to memorize anything meaning it can be done anytime, anywhere.
Part of the concept of
the performance is using different sound sources (SLOrk speaker arrays) to
spatialize sound. The speaker arrays will be set up all throughout the audience
and will be connected to different computers. The different computers will be
running different programs that will manipulate the sound in different ways.
Speaker Array.
Frustration.
I decided to use the
SLOrk speaker arrays to play ChucK files that will manipulate the sounds being
sampled from the interview. The problem I continue to run into, however, is
making the audio play through all of the speaker arrays from one source. The
different speakers have to be wired together using quarter inch cables, but I
continue to find that the outputs and inputs are not lining up. I have to go
through and make sure that all of the dac.chan expressions are correct in order
to make this all work.
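One way to sanity-check the wiring is a small test patch that sends a tone to each dac channel in turn, so I can hear which physical speaker is on which output. (The tone settings are arbitrary; the channel count comes from the running dac.)

```chuck
// wiring check: step a test tone through every dac channel
SinOsc tone;
440.0 => tone.freq;
0.2 => tone.gain;

for( 0 => int ch; ch < dac.channels(); ch++ )
{
    <<< "testing dac channel:", ch >>>;
    tone => dac.chan( ch );   // patch the tone to this output only
    1::second => now;
    tone =< dac.chan( ch );   // unpatch before moving on
}
```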
The microphone is another
problem I am facing. For a rehearsal run-through, I used a cardioid microphone set
up facing away from the speakers. I am finding that there are a lot of feedback
problems since I am planning on turning the levels of the speakers up very
high. Chris Chafe told me that he will be ordering some highly
directional microphones that I will probably be able to use for my
presentation. I am waiting on those and hoping for the best.
The Final Code
Now that the concert is
so close, I have finalized the code so that it is ready for performance. I have
decided to limit the number of speaker arrays to three since there were so many
problems chaining them together.
The first speaker array
will be set up in the center of the audience and will primarily function to
manipulate the sound coming directly out of the microphones. This patch will
allow me to control when the adc is turned on. It will also allow me to add
delay, set the rate of decay, add reverb, and add an LPF to the sound. All of
these will be manipulated directly out of the microphones.
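In outline, that chain looks something like this (the unit generators and parameter values below are illustrative, not the exact ones in my patch): live input, a feedback delay whose loop gain sets the rate of decay, reverb, then a low-pass filter.

```chuck
// sketch of the first station's chain:
// live input -> feedback delay -> reverb -> low-pass filter -> out
adc => Gain input => Delay d => NRev rev => LPF lp => dac;
d => Gain feedback => d;     // feedback loop sets the rate of decay

1.0 => input.gain;           // "adc on"; set to 0.0 to gate the mic
250::ms => d.max => d.delay;
0.5 => feedback.gain;        // higher = slower decay
0.1 => rev.mix;
2000.0 => lp.freq;

while( true ) 1::second => now;
```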
The other two speaker
arrays will be running code that will record clips throughout the performance
and play them back whenever I choose. The code maps the sound recording
properties to different keys on the keyboard. The top row of keys will be in
control of saving samples to different arrays. The second row of keys will play
the respective clips back at half the rate and the third row will play the
samples back at twice the rate. This way, I will be able to use the sound clips
throughout the performance to add color to the soundscape. Every sound that is
made throughout the performance will be created from the sounds being made during
the interview.
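A sketch of that keyboard mapping, using one LiSa per sample slot (only three slots here instead of a full row, and the key codes are placeholders for the actual mapping):

```chuck
// sketch: number keys record into slots, letter rows play back
// at half rate and double rate
Hid kb;
HidMsg msg;
if( !kb.openKeyboard( 0 ) ) me.exit();

// one LiSa per sample slot
LiSa slots[3];
for( 0 => int i; i < slots.size(); i++ )
{
    adc => slots[i] => dac;
    4::second => slots[i].duration;
}

fun void grab( int i )   // record 2 seconds into slot i
{
    1 => slots[i].record;
    2::second => now;
    0 => slots[i].record;
}

fun void play( int i, float rate )   // play slot i at the given rate
{
    rate => slots[i].rate;
    0::ms => slots[i].playPos;
    1 => slots[i].play;
}

while( true )
{
    kb => now;
    while( kb.recv( msg ) )
    {
        if( !msg.isButtonDown() ) continue;
        // '1'-'3' record into slots 0-2
        if( msg.ascii >= 49 && msg.ascii <= 51 ) spork ~ grab( msg.ascii - 49 );
        // 'Q','W','E': half rate playback
        else if( msg.ascii == 81 ) play( 0, 0.5 );
        else if( msg.ascii == 87 ) play( 1, 0.5 );
        else if( msg.ascii == 69 ) play( 2, 0.5 );
        // 'A','S','D': double rate playback
        else if( msg.ascii == 65 ) play( 0, 2.0 );
        else if( msg.ascii == 83 ) play( 1, 2.0 );
        else if( msg.ascii == 68 ) play( 2, 2.0 );
    }
}
```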
Linden Melvin, a biography
Linden Melvin is a junior double-majoring in Composition
and in Music, Science, and Technology. He is a member of the Stanford Chamber
Chorale and was recently part of the Drama Department production of RENT. His
dream in life is to live at CCRMA and forever be a part of that magic that goes
on here.
Converse Station program notes
This piece is an extension of a piece I developed in
220B (Converse) that
explored the human ability to comprehend speech out of context. Converse
Station focuses on the remarkable
power of human memory and the selectiveness we all have toward aural stimuli.
The sounds may not be discernible, the speech may not be understandable, but
everything heard in this performance is an organic product of the most powerful
instrument we have: the human voice. The aesthetic of one performer doing all
of the work in the piece is vitally important and reflects the complexity of
sound production that we often take for granted. Listen carefully and enjoy.
THE FINAL PRODUCT
Here is a link to the final product of the piece that
was premiered at the 220C final concert:
Next Steps
I am overall extremely satisfied with how Converse
Station turned out. I think there are a few things that can be added to make
the project even better and I might consider exploring some of these in 220D
and beyond. First of all, I think I need more effects or more stations that can
achieve different effects. The sounds I produced were interesting, but I am not
sure if they were interesting for the entire performance. Next, I would focus
on spatializing the sound more. Perhaps I could place speaker arrays throughout
the audience so that they feel like the sound is coming from all around them.
Next I would incorporate more feedback into the performance. For this
performance, I did not want anything to get out of control so I kept the levels
down to a minimum, but I think in later performances of the piece I would like
to include feedback in the sound. Lastly, I think it will be crucial to have
trained improvisers. Michael and Stephen did a FANTASTIC job (thank you again,
guys!) but there were moments when the conversation was so sparse it was
difficult to pull samples. Also, it might be a good idea to have the
interviewers in a separate room, maybe projected onto a screen on the stage.
But then again, that would make feedback impossible.
I think there are a lot of things that can be improved
about this project, but overall, I am very happy with how it turned out and
excited to work on it more in the future.