So I pretty much went crazy and spatialized and added things (tweaked things here and
there, made sure everything had the kind of timing that I wanted).
I had to do a kind of improvised spatialization because I don't have any
plugins and I couldn't get Ardour to work even though I spent $2 on
it. Oh well. I did come up with a good idea, though. I spatialized
to four channels, so I basically just needed two stereo files: one
stereo file for channels 1 and 2, and the other stereo file for
channels 3 and 4. Then I could mess with the levels to get some
circular sound. I also tried out this technique of filtering out
certain frequencies on one channel and filtering out the other
frequencies on another channel to get a weird effect. I think I
might work on that some more in the future. Anyway, after taking
about 3 to 4 days to spatialize it to four channels, Locky sent me
an 8 channel version of this part of the piece. Right. So I doubled
up my mono files to get 8 channels out of 4 and actually liked the
way it sounded. Tweaked things again and then played it at the
Spring concert, yay. Here's a stereo version of the 8-channel
version. It's definitely strange to hear it like this, but I used
some impulse-response stuff that takes 4 channels down to 2
channels. I had to fold some of the 8 channels down to get the
4-channel mix, but oh well, I think it captures (maybe) most of
it...
untitled.wav
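The circular-panning idea above (two stereo files covering channels 1-2 and 3-4, with levels swept to move the sound around the room) can be sketched with a simple cosine-taper gain law. This is just a Python illustration of the concept, with a made-up square speaker layout, not the level automation I actually drew in:

```python
import math

def quad_gains(angle):
    """Cosine-taper gains for 4 speakers at 0, 90, 180, 270 degrees.

    Returns gains for channels (1, 2, 3, 4); a source at `angle`
    (radians) is loudest in the nearest speaker, fades into the
    neighbors, and is silent in the opposite one.
    """
    gains = []
    for k in range(4):
        speaker = k * math.pi / 2
        # angular distance from source to this speaker, wrapped to [0, pi]
        d = abs((angle - speaker + math.pi) % (2 * math.pi) - math.pi)
        # full gain at the speaker, tapering to zero on the far side
        gains.append(math.cos(d / 2) if d < math.pi else 0.0)
    return gains

# a source sitting right at speaker 1 (angle 0)
g = quad_gains(0.0)
```

Sweeping `angle` from 0 to 2*pi over time is what produces the circular motion: the gain smoothly hands off from one channel pair to the next.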
SPIRIT
Continued working on the piece. I didn't get as far as I wanted to,
but I did take some suggestions to heart and changed some stuff up.
Here's the file for now:
thirdgo.wav
I'm currently not satisfied with this ending and I also need to
morph it into whatever Locky does. We're planning to meet up this
weekend to mold it together and to spatialize.
~~$*%!!! UPDATE!! *%^^!^~~~
So I've worked on this more in the past two days. I took each of the
tracks of Locky's piece and cut them up into smaller bits and
processed them through my system. Then I added all of those to the
piece from earlier this week, refactored a lot of things (panning,
volume, equalization, and such), and came up with what I think is the
first third of the piece. What remains is processing the original
track through Derek's Sourmash system and then morphing that (??)
into the original track. Here's the bit that I've got so far:
secondgoYEAH.wav
I had to mess with my ChucK file again (le sigh), but I wrote down
some notes there, so here's my updated file:
chaosMIDIPan.ck
------------------------------------------------------------
Yello. So I have decided to prerecord the piece for the final
performance. ChucK will freak me out if I actually play live. This
way I also have more control over every single little part of the
piece. In other news...
I've been experimenting aplenty with my system lately. Here's
a track:
things.wav: this
one is based on tracks that I found on freesound.org and have been
playing around with.
Locky McLauchlan made a track and sent me the layers, so I experimented
with them on my system:
usotest.wav
Last, I recorded myself playing with each instrument/layer separately
and edited/mixed it into one file...this is really, really raw. So
please don't judge me. I will be working on this a lot, but this is
just an idea of what I will be making for the final
project/performance:
firstgo1.wav
And some ChucK files were modified and added to this. Here are some
of the files so that I can keep track of them in the future:
X.ck
chaosMIDIPan.ck
loadrnb.ck
loadrnb2.ck
I have been working on extending parameters and features to my system
and bouncing around composition ideas. I believe I will be working
with my dear colleague and friend, Locky, on my final composition
for the Spring concert.
I recorded a couple of snippets of me playing with the fmchaos and of
me layering some sounds. Here are some audio samples of that:
fm.wav
things.wav
I also added some duration and panning controls to my system along
with a limiter to make sure that the sounds don't get too out of
hand. When we last saw macro.ck, I don't believe it was connected to
any controls. I added MIDI controls to it so that I can easily
control the parameters.
Additionally, I created some audio samples using the BandedWG
instrument in ChucK and passed those through the chaotic system. Here are some of the ChucK files:
bwg.ck
chaosMIDIPan.ck
X.ck
macro.ck
loadbwg.ck
Note that even if these file names are the same or similar to last
week, the files HAVE been edited.
And last, here is a recording of the BandedWG stuff that I made with
all of this:
bgschaos.wav
Here are the original files that I used for the above mixture:
bwg0.wav
bwg1.wav
bwg2.wav
bwg3.wav
bwg4.wav
bwgs.wav
Locky and I talked about some ideas for the composition. It will
involve a rhythmic component as well as atmospheric sounds (most
likely made with my chaos machine). We will also probably process
the daylights out of a canonical sounding r&b piece and then reverse
this processing back to the point where it just sounds like the
original stereotypical r&b piece. The processing can be done
partially with the chaotic system that I wrote, and partially with
other programs that we both know how to use. This collaboration is
still in its infancy, but we will be working on this throughout the
rest of the semester (or until we have to present it at the spring concert).
I borrowed a MIDI controller from the closet next to the mailboxes in
CCRMA. YAY!! It's an AKAI MPD26 and has pads (with aftertouch),
sliders, and knobs. There are more settings and modes that I need to
explore, but I've got some basics working.
I use the pads to trigger each sound file shred. This will
select/deselect the sound file. If the file is selected, I can use a slider to control
the volume of that sound file and I can use different knobs to
manipulate the x and r values of the sine map. This is nice because
I can switch between different values of x and r really quickly,
expanding my sound palette. Here's some code for that (it's an
extension of last week's code):
X.ck: the global class for all of this stuff
chaosMIDI.ck: basic manipulation of
sine map parameters (knobs) with ability to specify a sound file to play
with, the gain associated with the buffer (slider), and a keyboard key to
trigger it (triggered with pad)
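The knob scaling described above can be sketched like this. It's a Python toy of the mapping idea only; the CC numbers and the actual MIDI handling live in chaosMIDI.ck, and the target ranges below are the ones from di Scipio's paper (r in 0-4, x in -pi/2 to pi/2):

```python
import math

def cc_to_range(cc_value, lo, hi):
    """Scale a 7-bit MIDI CC value (0-127) linearly into [lo, hi]."""
    return lo + (cc_value / 127.0) * (hi - lo)

# hypothetical knob assignments, just for illustration:
r = cc_to_range(127, 0.0, 4.0)                   # knob fully clockwise -> r = 4.0
x = cc_to_range(64, -math.pi / 2, math.pi / 2)   # roughly centered knob -> x near 0
```

The nice property (versus the trackpad) is that the knob holds its last value, so r stays parked wherever you leave it.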
It may be nice to finally post an audio snippet! Here's a really short
bit of what this sounds like:
dropitontest.wav
Then I went a little crazy and connected it to the microphone. I
record 5 seconds into the LiSa buffers (into each of the 3 of them),
then I stop recording and just record what is being played out of
the dac UGen into each of the three LiSa buffers. This is a type of
feedback...I think. Here's some ChucK code for it:
audin.ck: same stuff but
with microphone input
And here are some uncompelling audio samples:
adctest1.wav
adctest2.wav
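The feedback setup can be sketched in miniature. This is a hypothetical one-buffer Python version of the loop (play the buffer out, write the attenuated output back into it), not the LiSa code itself:

```python
def feedback_pass(buffer, gain=0.5):
    """One pass of feedback: play the buffer and record the
    attenuated output back into it, sample by sample."""
    out = []
    for i in range(len(buffer)):
        sample = buffer[i]
        out.append(sample)          # what the listener hears this pass
        buffer[i] = sample * gain   # what came out goes back in, quieter
    return out

buf = [1.0, 0.5, -0.5]
first = feedback_pass(buf)    # plays the original recording
second = feedback_pass(buf)   # plays the fed-back, decayed copy
```

With `gain` below 1 the recirculated sound decays each pass instead of blowing up, which is the same reason the limiter mentioned earlier matters in the real-signal version.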
If you remember, I was playing around with some FM cross-modulation a
while ago. I connected this to the MIDI controller and the results
are pretttty different. I think it's because I can now change the
parameters (modulator gain and frequency and carrier gain and
frequency) like crazy. Here's some code:
fmchaosMIDI.ck
And a sound sample:
fmchaos.wav
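The cross-modulation itself can be sketched like this. The frequencies and modulation indices here are arbitrary stand-ins, not the values in fmchaosMIDI.ck:

```python
import math

def cross_fm(f1, f2, index1, index2, n, sr=44100):
    """Two sine oscillators that FM each other: each oscillator's
    phase increment is bent by the other's previous output sample."""
    out1, out2 = [], []
    p1 = p2 = 0.0
    y1 = y2 = 0.0
    for _ in range(n):
        p1 += 2 * math.pi * (f1 + index1 * y2) / sr
        p2 += 2 * math.pi * (f2 + index2 * y1) / sr
        y1, y2 = math.sin(p1), math.sin(p2)
        out1.append(y1)
        out2.append(y2)
    return out1, out2

a, b = cross_fm(220.0, 330.0, 100.0, 150.0, 1024)
```

Because each oscillator's frequency depends on the other's output, small parameter changes (the four MIDI knobs) can push the pair between clean tones and much rougher, chaotic-sounding territory.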
I played around with some other things, but they are not yet worth
posting. They will probably come into play as I create my piece. So
probably some more fiddling around with controls, sound
experimentation, and composing for the future.
I continued working on implementing things. I'm afraid I forgot to
explain the sine map last week. It is surprisingly not complicated
at all. Our parameters are x and r. We give them some initial
values. di Scipio's paper from before states that x should be
between -pi/2 and pi/2 and that r should go from 0.0 to 4.0. The
basic map is x <- sin(r*x). This means that we update the x
according to the last x. This results in some interesting sounds. When r
is smaller, x oscillates less. When r is larger, x runs the gamut.
I have found, with experimentation, that we still get interesting
sounds when we do not stay within these boundaries.
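A minimal numeric sketch of that behavior (small r keeps x in a narrow range that settles down, large r lets x run the gamut); the specific r values here are just examples:

```python
import math

def sine_map_orbit(x0, r, n):
    """Iterate the map x <- sin(r * x) n times and return the orbit."""
    orbit = [x0]
    x = x0
    for _ in range(n):
        x = math.sin(r * x)
        orbit.append(x)
    return orbit

def spread(orbit):
    """How much of the [-1, 1] range the orbit actually visits."""
    return max(orbit) - min(orbit)

calm = sine_map_orbit(0.3, 0.9, 200)   # r < 1: x decays quietly toward 0
wild = sine_map_orbit(0.3, 3.9, 200)   # large r: x bounces all over [-1, 1]
```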
This week, I added some keys on
the keyboard so that I could manipulate the parameters of the sine
map. This includes increasing and decreasing the x (which is mapped
to buffer position and is also the parameter in the sine map that is
continuously updated) and increasing/decreasing r by a little bit, a
medium amount, or a large amount. The r parameter kind of controls
the value of x that will come out next, though indirectly. Smaller
values of r make x oscillate less, while large values of r make x
move around kind of erratically. This type of mapping allows for
better control of the sound and also a wider variety of sounds.
Additionally, we only record the sound file once per LiSa buffer.
After this, we take the output of the sound and record it into the
buffer to get a feedback kind of effect. The
following ChucK files are examples of this.
X.ck: the global class for all of this stuff
chaogain.ck: basic manipulation of
sine map parameters with ability to specify a sound file to play
with, the gain associated with the buffer, and a keyboard key to
trigger it
chaogainbuf.ck: same as above, but
we add an envelope to the buffer sounds so we can kind of fade in
and out of the grains
chaogainbufinterpr.ck: same
as above, except we now interpolate between the r parameter values
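The r interpolation can be sketched as a simple linear glide; the step count and endpoint values here are made up for illustration:

```python
def interp_r(current, target, steps):
    """Linearly glide from the current r to the target r so the
    map's behavior changes smoothly instead of jumping."""
    values = []
    for i in range(1, steps + 1):
        values.append(current + (target - current) * i / steps)
    return values

glide = interp_r(0.5, 3.5, 6)   # six intermediate r values ending at 3.5
```

Feeding each intermediate r into the map in turn avoids the audible click you get when r jumps straight from one regime to another.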
I mentioned last week that in addition to playing around with the
micro level of sound, I also want to control higher level parameters
at the "macro-level" of sound. I wrote this to go with the stuff
above. It is simply a low pass filter with a changing Q and
frequency. It can be executed with the above code. It also has
some sine map stuff.
macro.ck: manipulate the higher level sounds
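The macro-level control can be sketched as a control signal: sine-map states scaled into a cutoff-frequency range. The range below is made up, and macro.ck's changing Q works the same way as the frequency here:

```python
import math

def macro_lfo(x0, r, n, f_lo=200.0, f_hi=2000.0):
    """Drive a lowpass filter's cutoff from sine-map states:
    each state in [-1, 1] is scaled into [f_lo, f_hi] Hz."""
    x = x0
    cutoffs = []
    for _ in range(n):
        x = math.sin(r * x)
        cutoffs.append(f_lo + (x + 1.0) / 2.0 * (f_hi - f_lo))
    return cutoffs

freqs = macro_lfo(0.2, 3.7, 32)   # 32 successive cutoff values
```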
Although this is all nice, fine, and dandy, I feel that my control
over the sounds is not quite as good or consistent as it should be. I
think I will move this over to some type of MIDI control. This will
allow me to turn a knob or something to switch a parameter of the
dynamical system and let the value stay there. I had played around
with this using the trackpad, but it's too crazy. I cannot make the
mouse stay at a constant value, so I feel that there is less control.
Hopefully by next week, I will have some type of MIDI -> sine map
parameter mapping. From there, I should be able to explore more
sounds and figure out the kind of piece I want to make from this. Yay.
I'm currently working on implementing some of the things I read about
last week. Just to test some things out, I made something with a sine
map where you can manipulate the parameters. Then I hooked this up to
this granular synthesis code where we trigger the system with a
short sample. The last thing I played with was two cross-modulated
oscillators that kind of do FM with each other. The code is linked
to below.
sampbysamp2.ck
sampbysamp3.ck
sampbysamp4.ck
fmchaos.ck
I would still like to work some feedback into the system and continue
experimenting and adding controls.
I also found this nice example that someone made. It's not my goal
sound, but I think it does sound interesting.
http://fastheadache.blogspot.com/2010/01/chaotic-synthesis.html
This week, I read some more papers on chaotic sound synthesis and learned
about iterated nonlinear functions for generating sound. Di Scipio offers a pretty nice explanation of this technique in his paper
Iterated Nonlinear Functions as a Sound-Generating Engine (link). It is similar to the chaotic sound
synthesis that I wrote about above except that we iteratively apply a
function to itself n times (if we are at time n). Because we apply
the same function again and again, we get a type of self-similarity.
This self-similarity or self-consistency is like the steady-state part of the nonlinear dynamical
systems that I talked about above.
I am hoping to play around with this for my final project.
Moreover, he talks about playing around with sound at both the micro-
and macro- levels. I like this concept and want to explore it more.
Because I am not the best at summarizing, here's a nice excerpt from Curtis Roads's book Microsound on Di
Scipio's piece where he uses this technique:
In his Sound & Fury, Di Scipio performs on a Kyma system (developed
by the Symbolic Sound company), effecting time expansion and frequency
scaling on an ensemble of acoustic instruments by using faders to
control parameters such as time expansion, frequency, grain rate, and
the spacing between grains.
After reading di Scipio's paper, I implemented one of the functions he
talked about. It is an iterative sine map that goes from sounding
like a sine wave to sounding like noise depending on the value of a
parameter he calls 'r.' The sound is unremarkable, but I've
included the code below so that I can refer back to it.
sampbysamp.ck
I like the idea of mapping these functions to granular synthesis
parameters. I will probably map these to some longer snippets of
sound, like I did in my 220b project. I think it would be interesting
to map the position of the sound in the buffer to the iterated state.
This way, when the dynamical system is not in steady-state, it will be
a bit crazy, but when it is in the steady-state, it will oscillate and
change subtly. This means that I would have to find parameters that
go in and out of that steady-state. I will have to explore more with
this idea. Another idea (and plan for the future) is to use some
feedback from the atmosphere. I think this would involve getting some
input from the microphone and feeding it back into the system. This
could involve computing the difference between what I put out and what
I got in and modifying the dynamical system with this information.
For next week, I plan to implement some of these ideas. Another
possible future idea is also using the techniques from Di Scipio's
other paper to make environmental sounds from the nonlinear iterative
functions. I think matching these kinds of sounds with granulized
sounds and synthesized sounds will make for a compelling piece.
Now that I've talked about that for a while, I also looked up some
examples and found some more interesting things.
I found someone's examples of sonifying Chua's chaotic circuits here:
http://jamesnsears.com/2004/12/chuas_oscillator_in_musical_ap.php
The sounds are not moving on their own, but I think that we could
combine them in pretty interesting ways.
This site also explains
EVERYTHING: http://www.stsci.edu/~lbradley/seminar/index.html
The person who worked on that site also has a great paper with
examples in it: http://campbellfoster.ca/iresearchpaper/#top. I
really like the examples here! I will probably take this direction
because, though Di Scipio's stuff is cool, he talks about how you don't
have that much control over the sound. That's why he made his
composition interactive. Campbell Foster instead attempts to make it
more controllable by providing interaction with the feedback system.
Here is one more informative link that explains that the
chaotic/nonlinear dynamical systems I talked about during the first
week is actually just the same thing as iterated function systems:
http://scienceblogs.com/goodmath/2007/08/iterated_function_systems_and_1.php
GOAL: My project for this class will focus on some type of chaotic sound
synthesis. I would like to explore different ways of manipulating the
sound based on dynamical systems. Instead of just focusing on
creating timbres with the dynamic systems or just manipulating
higher-level parameters of the sound with the dynamical systems, I
would like to play around with all of the sound using these
systems. Both the micro-level and macro-level creation of sound (as
di Scipio calls them) will be created through either one or more
dynamical system(s). I hope to have a piece to play or perform by the
end of the quarter.
This first week of class I just read some papers and decided on
a hand-wavy version of my project idea, which is chaotic sound
synthesis.
To explain this a bit, a dynamical system is a
mathematical concept where you have a rule and states. At any given
point in time, you have a state and the rule will always tell you
what state comes next given this current state. Given some initial
state, you can iterate through the system to figure out all the
states for all future times, and this evolution is deterministic
(meaning there is only one next state for each current
state). We can have nonlinear dynamical systems, which
sometimes exhibit chaotic behavior that we can think of as
completely unpredictable, even though it is a fundamentally
deterministic system. The interesting part about these systems is
that they can settle down to some steady-state. A variable moving
according to the rules of this type of dynamical system will evolve
over time and move towards a set we call an attractor. For these
nonlinear dynamical systems, we are interested in finding these
attractors because points that get close enough to the attractor
will remain close even if slightly disturbed. Knowing this, we can
play around with different parameters of a nonlinear dynamical
system and make some interesting sounds.
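A tiny numeric sketch of those two properties (determinism, and nearby states being pulled toward the same attractor), using a simple sine map x <- sin(r*x) with an r value where the map has a stable fixed point; the numbers are just for illustration:

```python
import math

def iterate(x0, r, n):
    """Run the rule x <- sin(r * x) forward n steps from x0."""
    x = x0
    for _ in range(n):
        x = math.sin(r * x)
    return x

# two nearby initial states, same rule: both end up at the same attractor
a = iterate(0.40, 2.0, 500)
b = iterate(0.41, 2.0, 500)   # a slightly disturbed starting state
```

At r = 2.0 the attractor is a single fixed point, so both orbits land on the same value and that value satisfies x = sin(2x); at other r values the attractor can be a cycle or something much stranger.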
This "nonlinear dynamical
system" business sounds kind of loaded, but luckily, there are some simple
one-dimensional maps that we can use to achieve this. I coded up two simple
maps in ChucK to play around with these ideas. Some code was written based on old Music
220a code, some code was written based on my old 220b project, and
some code was written from scratch for this. You can click on the
links below for the ChucK code. I do not have any sound files for
these because I am not too happy with the sound but rather just
wanted to make sure I could code this out and make it work.
begin.ck
begin2.ck
begin3.ck
begin4.ck
begin5.ck
whoa.ck
whoa2.ck
whoa3.ck
whoa4.ck
whoa6.ck
whoa7.ck
For the files that sound like little clicks, I set an impulse to 1 at
the rate of the value of the state of the dynamical system. For the
files that use a pre-recorded sound file, the state of the dynamical
system determines the buffer position and the length of time that the
chunk is played for.
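Those two mappings can be sketched roughly as follows; the millisecond base, r value, and buffer length are stand-ins, not values from the whoa/begin files:

```python
import math

def next_state(x, r=3.5):
    """One step of the sine map driving everything."""
    return math.sin(r * x)

def click_times(x0, n, base_ms=50.0):
    """Impulse version: each state value sets the wait until the
    next click (state near 0 -> clicks come fast)."""
    times, t, x = [], 0.0, x0
    for _ in range(n):
        x = next_state(x)
        t += base_ms * (abs(x) + 0.01)   # never wait exactly zero
        times.append(t)
    return times

def grain_position(x, buffer_len):
    """Buffer version: map a state in [-1, 1] to a start sample."""
    return int((x + 1.0) / 2.0 * (buffer_len - 1))

times = click_times(0.2, 16)        # 16 irregularly spaced click times
pos = grain_position(0.5, 44100)    # where in the file to start playing
```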