Homework 2 - Final Audio Visualizer
Izma Shabbir, October 21, 2024
An October walk in Palo Alto
Video: Visualizer
Screenshots:
Instructions:
None. This visualizer uses a bird recording I made outside my apartment, then passes noise through low-pass and high-pass filters (LPF/HPF) to generate wind on top of the bird calls.
Ideas:
I was really compelled to use recordings from my bird recording app (Merlin Bird ID), which takes recordings and identifies the bird call. You can export the files as .wav files, and it felt like a really fun recording to play around with. I then used filters to create different tones of wind sounds on top of the bird calls.
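The wind layer boils down to filtered noise. A minimal sketch of that idea in ChucK (the gain, cutoff range, and sweep rate here are illustrative values, not the exact settings from the final patch):

```chuck
// minimal wind sketch: white noise shaped by a sweeping low-pass filter
Noise n => LPF lpf => dac;
0.1 => n.gain;

// slowly sweep the cutoff so the wind "gusts"
while( true )
{
    // map a slow sine to a cutoff between roughly 100 and 900 Hz
    500 + 400 * Math.sin( now/second * 0.5 ) => lpf.freq;
    10::ms => now;
}
```

Layering a second noise source through an HPF (as in the final code) adds a brighter, hissier component on top of this low rumble.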
Reflection:
Honestly, this piece felt challenging and frustrating! I feel very far away from the visuals I want to get to, but I feel secure that I will continue to grow as an audio/visual designer as the quarter progresses. I actually liked the aesthetics of my milestone more than my final project, but I had used a sound recording of one of my favorite songs on top of my milestone. I felt a bit hindered by my sound-making ability.
Difficulties:
I think creating a narrative felt really difficult for me. It was really helpful to play into the idea of birds and nests and to align the visuals to my sounds, but I was feeling pretty lost. I feel a bit more comfortable now, so I'm excited to keep making progress.
Thanks to --
Thank you to Kunwoo for helping me debug my code, write code for my filters, and encourage me! Andrew helped me a lot with converting coordinates for my milestone. Mollie Redman helped me a lot with thinking about rotations and converting coordinates. I used the ChucK and ChuGL APIs and examples.
Code
// import test wave file
//me.dir() + "Downloads/birdwatching.wav" => string birdwatching;
// window size
1024 => int WINDOW_SIZE;
// y position of waveform
2 => float WAVEFORM_Y;
// width of waveform and spectrum display
1 => float DISPLAY_WIDTH;
// waterfall depth
64 => int WATERFALL_DEPTH;
// window title
GWindow.title( "izmaaaa" );
// fullscreen the window
GWindow.fullscreen();
// position camera
//GG.scene().camera().posZ(8.0);
// waveform renderer
//change width to change thickness of waves
GLines waveform --> GG.scene(); waveform.width(1);
// translate up
waveform.posY(0.455);
waveform.posX(0);
// color to colors i likee!!
waveform.color( @(1, 0.764, 0) );
//scale X axis
waveform.scaX(-0.005);
//scale Y axis
waveform.scaY(-0.01);
// make a waterfall
Waterfall waterfall --> GG.scene();
// translate down
//waterfall.posY( SPECTRUM_Y );
// spectrum renderer
GLines spectrum --> GG.scene(); spectrum.width(0.9);
// translate down
spectrum.posY(0.7);
// color
spectrum.color( @(0.368, 0.360, 0.329) );
//scale x axis
spectrum.scaX(-0.09);
spectrum.scaY(-0.09);
//spectrum.pos(@(-0.2, 0, -16));
// rotate (ChuGL rotation arguments are in radians)
spectrum.rotateY(90);
waveform.rotateX(180);
// add orbit camera
// Option 2: use the builtin Orbit camera controller
// GOrbitCamera orbit_cam --> GG.scene();
// GG.scene().camera(orbit_cam);
//Camera start
// while (true)
//{
// GG.camera().lookAt(@(0,0,1));
//}
// input gain, routed to dac
Gain input => dac;
// accumulate samples from the input (sound buffer + filters) instead of adc
input => Flip accum => blackhole;
// take the FFT from the same input
input => PoleZero dcbloke => FFT fft => blackhole;
// (the original example used the mic:)
//adc => Flip accum => blackhole;
//adc => PoleZero dcbloke => FFT fft => blackhole;
// set DC blocker
.95 => dcbloke.blockZero;
// set size of flip
WINDOW_SIZE => accum.size;
// set window type and size
Windowing.hann(WINDOW_SIZE) => fft.window;
// set FFT size (will automatically zero pad)
WINDOW_SIZE*2 => fft.size;
// get a reference for our window for visual tapering of the waveform
Windowing.hann(WINDOW_SIZE) @=> float window[];
// sample array
float samples[0];
// FFT response
complex response[0];
vec2 positions[WINDOW_SIZE];
// map audio buffer to 3D positions
fun void map2waveform( float in[], vec2 out[] )
{
if( in.size() != out.size() )
{
<<< "size mismatch in map2waveform()", "" >>>;
return;
}
// map each sample to a point on a circle
DISPLAY_WIDTH => float width;
for (int i; i < in.size(); i++)
{
// index converted to an angle (one revolution per 50 samples)
i/50.0*Math.two_pi => float theta;
// radius from the sample's magnitude
50 * Math.sqrt( Math.fabs(in[i]) ) + 5 => float r;
// polar to Cartesian
r * Math.cos(theta) => out[i].x;
r * Math.sin(theta) => out[i].y;
}
}
// map FFT output to 3D positions
fun void map2spectrum( complex in[], vec2 out[] )
{
if( in.size() != out.size() )
{
<<< "size mismatch in map2spectrum()", "" >>>;
return;
}
// map each FFT bin to a point on a circle
DISPLAY_WIDTH => float width;
for (int i; i < in.size(); i++)
{
// index converted to an angle (one revolution per 30 bins)
i/30.0*Math.two_pi => float theta;
// radius from the bin's magnitude
50 * Math.sqrt( (in[i]$polar).mag ) + 5 => float r;
// polar to Cartesian
r * Math.cos(theta) => out[i].x;
r * Math.sin(theta) => out[i].y;
}
}
// custom GGen to render waterfall
class Waterfall extends GGen
{
// waterfall playhead
0 => int playhead;
// lines
GLines wfl[WATERFALL_DEPTH];
// color
@(0.258,0.156,0.054) => vec3 color;
// iterate over line GGens
for( GLines w : wfl )
{
// aww yea, connect as a child of this GGen
w --> this;
// line width
w.width(0.065);
// color
w.color( @(1.0, 1, .4) );
w.scaX(0.3);
w.scaY(0.15);
}
// copy
fun void latest( vec2 positions[] )
{
// set into
positions => wfl[playhead].positions;
// advance playhead
playhead++;
// wrap it
WATERFALL_DEPTH %=> playhead;
}
// update
fun void update( float dt )
{
// position
playhead => int pos;
// so good
for( int i; i < wfl.size(); i++ )
{
// start with playhead-1 and go backwards
pos--; if( pos < 0 ) WATERFALL_DEPTH-1 => pos;
// offset Z
wfl[pos].posZ( -i );
// set fade
wfl[pos].color( color * Math.pow(1.0 - (i$float / WATERFALL_DEPTH), 4) );
}
}
}
// input file & noise sources: instantiate and patch into the input gain
Noise n => HPF f => JCRev r => input;
Noise ln => LPF lf => r => input;
Noise ln1 => LPF lf1 => r => input;
SndBuf bird => input;
.3 => r.mix;
.1 => f.gain;
0.1 => n.gain;
0.2 => lf.gain;
0.1 => ln1.gain;
0.3 => float HPF_Speed;
10 => float LPF_Speed;
0.6 => float LPF1_Speed;
1 => bird.gain;
"Downloads/birdwatching.wav" => bird.read;
fun void birds ()
{
while( true )
{
// rewind to the start of the file
0 => bird.pos;
bird.length() => now;
}
}
spork ~ birds();
fun void muteNoise()
{
// fade the noise down over half a second, then silence everything
now + 0.5::second => time later;
while (now < later)
{
if (n.gain() > 0.0)
{
n.gain() - 0.001 => n.gain;
}
else
{
0.0 => n.gain;
}
1::ms => now;
}
0.0 => n.gain;
0.0 => ln.gain;
0.0 => bird.gain;
}
spork ~ muteNoise();
fun void playNoise() {
while( true )
{
// sweep the cutoff
Math.sin(now/second*HPF_Speed)*110 => Math.fabs => Std.mtof => f.freq;
Math.sin(now/second*LPF_Speed) * 110 => Math.fabs => Std.mtof => lf.freq;
Math.sin(now/second*LPF1_Speed) * 110 => Math.fabs => Std.mtof => lf1.freq;
// to make fluctuation faster: Math.sin (now/second*2) or slower (*0.5)
// advance time
5::second => now;
}
}
spork ~ playNoise();
fun void unmuteNoise()
{
0.2 => n.gain;
0.2 => ln.gain;
}
//spork ~ muteNoiseFinal();
fun void doAudio() {
while (true )
{
// sound input?
// upchuck to process accum
accum.upchuck();
// get the last window size samples (waveform)
accum.output( samples );
// upchuck to take FFT, get magnitude response
fft.upchuck();
// get spectrum (as complex values)
fft.spectrum( response );
// jump by samples
WINDOW_SIZE::samp/2 => now;
}
}
spork ~ doAudio();
fun void muteNoiseFinal()
{
0.0 => n.gain;
0.0 => ln.gain;
0.0 => bird.gain;
30::second => now;
}
// graphics render loop
while( true )
{
// next graphics frame
GG.nextFrame() => now;
// map to interleaved format
map2waveform( samples, positions );
// set the mesh position
waveform.positions( positions ); // chugl
// map to spectrum display
map2spectrum( response, positions );
// set the mesh position
spectrum.positions( positions ); // chugl
//spectrum rotate
//spectrum.rotateX(Math.pi/2);
//waveform rotate
waveform.rotateX(Math.pi);
// spin the waterfall each frame
waterfall.rotateX(Math.pi/0.1);
//waterfall2.rotateZ(Math.pi/3);
// feed the latest positions to the waterfall
waterfall.latest(positions);
//scene graph
// if (UI.begin("window title")) {
// pass any ggen to show this ggen and all children
// UI.scenegraph(GG.scene());
// }
//UI.end();
}
Milestone 1, Audio Visualizer
Izma Shabbir
October 9, 2024
Audio Visualizer Name: I Triangle James Blake
Description: I wanted to visualize the way I feel when I listen to music by one of my favorite artists, James Blake. While the sound can increase in intensity, it starts off quiet and builds over time. I wanted the waterfall to reflect that. I chose a color palette that felt soft but bold, feelings which the song embodies.
Screenshots:
This was my first time back in coding land in a while, and it took me a significant amount of time to get back into the headspace. It was really helpful to talk through what my goals were, and I knew I wanted to land on closed geometric shapes. Getting support with converting coordinates let me start playing around with the number of sides, and thus different shapes. I really enjoyed getting to experiment with color palettes and dive into changing RGB colors away from the standard colors. I want to add more fluidity and visual interest to this code going forward, but I really enjoyed this first step!
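The coordinate conversion behind those closed shapes comes down to sampling evenly spaced angles around a circle. A hypothetical standalone sketch of the idea in ChucK (not the milestone code itself; `sides` and the unit radius are illustrative):

```chuck
// sketch: vertices of a closed n-sided polygon via polar -> Cartesian
6 => int sides;
vec2 shape[sides + 1];  // +1 point to close the loop
for (int i; i <= sides; i++)
{
    // evenly spaced angles around one full revolution
    (i $ float) / sides * Math.two_pi => float theta;
    Math.cos(theta) => shape[i].x;
    Math.sin(theta) => shape[i].y;
}
// to draw, feed the points to a GLines, e.g.:
// GLines outline --> GG.scene();
// outline.positions( shape );
```

Changing `sides` morphs the outline from triangle to hexagon and beyond, which is what makes experimenting with the number of sides so easy once the conversion is in place.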
Video: https://www.youtube.com/watch?v=YJ3TSPptVTg