March 14th, 2023
Music 356 / CS470, Stanford University
I wekinated my sequencer from 256a and also fixed a bunch of the problems it had under the hood.
I really love the sequencer I made in 256a; it gives some fun constraints. The problem, though, is that it takes a lot of time and clicking to actually do anything with it. The goal here was to make the sequencer much easier and faster to use, and what's an interface method that most people know how to use? Putting their hand in front of the camera! I really wanted to intentionally obfuscate some of the input, so that a user can really explore the sequencer and find capabilities they didn't know existed. In the original Axo Band you can literally see every possible axis of play, and there's none of the wonder and surprise of finding something new. This new input is definitely more approachable for a beginner, you just need to have a hand, but it takes much more play and exploration to regain the precision of the original.
At first I wanted one hand to control the beat and tempo, and the other hand to control the instrument and note. This proved a bit unwieldy to train and made keyboard input infeasible, so I decided to use one hand plus keyboard input.
Without further ado, AIxo Band!
Use your hand to edit the selected attributes!
Swap Tracks - Press , , , or  to swap to the corresponding track
Pause Editing - Press [space] to toggle editing on and off
Swap Edit Mode - Press [Q] to swap between editing the Beat or the Instrument for the selected track
NOTE: In VisionOSC, turn off all but hand tracking for better performance
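The control scheme above boils down to a small state machine: a selected track, an editing on/off flag, and a Beat/Instrument edit mode that routes the hand input. Here's a minimal sketch of that idea in Python (purely illustrative, not the actual ChucK code; the class and method names, and the assumption that number keys select tracks, are mine):

```python
# Illustrative sketch (NOT the actual ChucK implementation) of the
# editing state machine described above: track selection, an
# editing on/off toggle, and a Beat/Instrument edit mode.

class SequencerState:
    def __init__(self, num_tracks=4):
        self.num_tracks = num_tracks
        self.current_track = 0       # which track hand input edits
        self.editing = True          # toggled with [space]
        self.edit_mode = "beat"      # toggled with [Q]: "beat" or "instrument"

    def handle_key(self, key):
        if key == " ":                              # Pause Editing
            self.editing = not self.editing
        elif key.lower() == "q":                    # Swap Edit Mode
            self.edit_mode = "instrument" if self.edit_mode == "beat" else "beat"
        elif key.isdigit() and 1 <= int(key) <= self.num_tracks:
            self.current_track = int(key) - 1       # Swap Tracks (number keys assumed)

    def apply_hand_input(self, value):
        """Route a hand-tracking value to the selected track/attribute."""
        if not self.editing:
            return None
        return (self.current_track, self.edit_mode, value)
```

The nice part of this structure is that the hand-tracking values never need to know what they control; the keyboard just re-routes them.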
This project taught me an important lesson: just because everything seems like it's working when tested separately doesn't mean it'll work when connected. I've learned this lesson in robotics in a big way, but hubris comes for us all, and I really thought things were working "well enough", but once all those little imperfections get stacked on top of each other, the whole tower comes tumbling down. At the end of my sequencer project in 256a there were a bunch of bugs which I knew about, but I just avoided them in my presentation and figured I'd never have to actually fix them. Some of the bugs were nasty to fix, and when rereading the file for this project, it's very clear that a lot of functionality from the original was dropped. In retrospect, a part of me wishes I had just started from a blank slate and referred back to the old code, but by the time I realized this I was in far too deep. The single ChucK script running everything is a stylistic nightmare, but it actually works really well, and I'm truly proud of where my project ended up, because it's exactly where I wanted and expected it to be. I wish I had time to fine-tune my Destiny mosaic more, but I'm still pretty proud of where that ended up. AI has been a really cool tool, and I'm profoundly glad I got to make art with it before taking 221 next quarter. This class has been an experience like no other, and I truly believe it has heightened my ability to think about all sorts of ethical conundrums, as well as my ability to make goofy, fun projects with AI.
The project from this quarter which I'm glad I made is my Destiny audio reconstruction project. I had a really fun time making it, and I feel like it's the project where stuff about AI was really starting to click. I spent hours trying to get this to sound at all like the gameplay audio, to no avail, when suddenly I changed a parameter and it started working, and the joy I had in that moment is what makes me want to share this again.
Click here for downloads or more information
For the final project I want to try to extend my Wekinator sequencer to support multiple tracks and more interesting control mechanisms. For the first milestone I tried to implement having multiple tracks working at once, but as you can probably hear, it gets weird and garbled after a little while, and I couldn't figure out how to fix that yet. At first I tried to increase the number of Wekinator output channels for the multiple tracks, but I couldn't figure out how to make it so that you could control each channel individually. I ultimately decided to cut my losses there and instead implement the multitracking in the sequencer code itself, rather than trying to make independent Wekinator channels. I plan on adding hand tracking for frequency control by the end of the project, and making it so the beats are better quantized. Here's what I've got so far:
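On the quantization idea: what I mean is snapping each event's time to the nearest point on a grid derived from the tempo. A minimal sketch of that in Python (illustrative only, not the ChucK code; the function name and parameters are just for this example):

```python
# Sketch of beat quantization: snap an event timestamp (in seconds)
# to the nearest subdivision of a beat, given a tempo in BPM.

def quantize(t, bpm=120, subdivisions=4):
    """Snap time t (seconds) to the nearest grid point.

    At 120 BPM a beat is 0.5 s; with 4 subdivisions the grid
    step is 0.125 s, so e.g. t = 0.13 snaps to 0.125.
    """
    beat_len = 60.0 / bpm            # seconds per beat
    grid = beat_len / subdivisions   # seconds per grid step
    return round(t / grid) * grid
```

In the actual sequencer this would happen at scheduling time, so slightly-off hand gestures still land on the beat.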
For my existing project I want to use my mosaic Destiny program, but I plan on making some tweaks to the features and their weights, and recording a new more polished video.