sour mash

The name is partially derived from the idea of mashing up songs, except that the snippets I use can vary from unintelligibly small to slightly longer and more intelligible, which may or may not give the result an acerbic flavor. The other reason is that sour mash is one method used to produce the nectar of the gods, whiskey. I also like to think of this synthesis/exploratory process as a "distillation" of sorts.

motivation

The sour mash spun out of my desire to better understand feature representations of music. MIR researchers have found that performance on classification and retrieval tasks increases as larger and larger feature sets are incorporated. While this is not all that surprising, I've been frustrated by the kitchen-sink approach to analyzing music. Maybe it's the fact that I'm not a pure engineer. Maybe it's because I believe that if we better understand the current limitations of our feature sets, we will be able to build better features. Either way, I wanted to dig into the feature representations of music that we know and love. In the process, I think I may have stumbled across a pretty cool way to make music.

experimental design

My initial goal was to create a database of "source songs" that would then be used to re-synthesize an input track in real time. I naively decided to use 25-50 columns, each of which would correspond to a feature of some small sample of audio. Ignoring the difficulty of real-time audio, I began by extracting 20 MFCCs (Mel-Frequency Cepstral Coefficients) and storing them in an SQLite database. Then I tried to read in a new track, extract 20 MFCCs over the audio buffer, query the database for the closest match, and pull audio from the source song into the output buffer. This was a miserable failure. Every single aspect of the system was too slow. Like Michelangelo chiseling David out of a 9-foot-tall piece of marble, I began paring down the system I started with until it was functional in real time. The final system uses a database of a whopping 3 source songs and 5(!) features. For each track, I store audio buffers of varying sizes so I can experiment with shorter and longer sample sizes. It also visualizes a subset of the database in 3 dimensions to give the user a better idea of what the feature space looks like.
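To make the ingestion step concrete, here is a minimal sketch of how per-window features might be written to the database, using the final system's 5 features. It assumes the SQLite C API; the names (ingestSong, extractMFCCs) and the table layout are placeholders of mine, not the actual project code.

<pre>
// A minimal sketch of the offline ingestion pass, not the actual project code.
// Assumptions: the SQLite C API, and a feature extractor (extractMFCCs)
// supplied by the analysis chain elsewhere; all names are placeholders.
#include <sqlite3.h>

// Hypothetical: fills coeffs[0..n-1] with the first n MFCCs of one window.
void extractMFCCs(const float* window, int numSamples, double* coeffs, int n);

void ingestSong(sqlite3* db, int songId, const float* audio,
                int totalSamples, int windowSize)
{
    // One row per analysis window: which song, where in it, and 5 features.
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS frames ("
        " song INT, offset INT, c0 REAL, c1 REAL, c2 REAL, c3 REAL, c4 REAL);",
        nullptr, nullptr, nullptr);

    sqlite3_stmt* stmt = nullptr;
    sqlite3_prepare_v2(db,
        "INSERT INTO frames VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7);",
        -1, &stmt, nullptr);

    double c[5];
    for (int offset = 0; offset + windowSize <= totalSamples; offset += windowSize)
    {
        extractMFCCs(audio + offset, windowSize, c, 5);
        sqlite3_bind_int(stmt, 1, songId);
        sqlite3_bind_int(stmt, 2, offset);
        for (int i = 0; i < 5; ++i)
            sqlite3_bind_double(stmt, 3 + i, c[i]);
        sqlite3_step(stmt);   // execute the insert
        sqlite3_reset(stmt);  // re-arm the statement for the next window
    }
    sqlite3_finalize(stmt);
}
</pre>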

software design

To speed up the database, I implemented a variation on SQLite that indexes rows for range queries (typically used for spatial searches). The 5 features that I selected (rather arbitrarily) were the first 5 MFCCs. Because C++ is the Dale Earnhardt (RIP) of the programming circuit, I had to abandon my beloved Python. I used a JUCE GUI to deal with buttons, sliders, boxes, and even a little bit of OpenGL. I also relied on JUCE's sliders to sidestep the finicky process of parameter selection. In particular, it's hard to find the optimal range to use when searching for points in the DB. If the range is too narrow, you won't find anything; if it's too large, the query slows down the audio buffer. I demonstrate the software with three source songs (in the database) and three input songs. Ideally, I'd be able to scale everything up real big. But alas, that's for another day.
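My reading of that "variation" is SQLite's R*Tree module, which conveniently tops out at 5 dimensions, exactly what 5 MFCCs need (an id plus a min/max pair per coefficient). A minimal sketch of the lookup under that assumption; the table name frame_index and the min == max storage convention are mine:

<pre>
// A sketch of the range lookup, assuming SQLite's R*Tree module; not the
// actual project code. The virtual table would be built once per database:
//   CREATE VIRTUAL TABLE frame_index USING rtree(
//     id, c0min, c0max, c1min, c1max, c2min, c2max,
//         c3min, c3max, c4min, c4max);
#include <sqlite3.h>
#include <vector>

std::vector<long long> findMatches(sqlite3* db, const double query[5],
                                   double range)
{
    sqlite3_stmt* stmt = nullptr;
    sqlite3_prepare_v2(db,
        "SELECT id FROM frame_index"
        " WHERE c0min >= ?1 AND c0max <= ?2"
        "   AND c1min >= ?3 AND c1max <= ?4"
        "   AND c2min >= ?5 AND c2max <= ?6"
        "   AND c3min >= ?7 AND c3max <= ?8"
        "   AND c4min >= ?9 AND c4max <= ?10;",
        -1, &stmt, nullptr);

    // Each stored point is a degenerate box (min == max), so the search box
    // is [query[i] - range, query[i] + range] in every dimension.
    for (int i = 0; i < 5; ++i)
    {
        sqlite3_bind_double(stmt, 2 * i + 1, query[i] - range);
        sqlite3_bind_double(stmt, 2 * i + 2, query[i] + range);
    }

    std::vector<long long> ids;
    while (sqlite3_step(stmt) == SQLITE_ROW)
        ids.push_back(sqlite3_column_int64(stmt, 0));
    sqlite3_finalize(stmt);
    return ids;
}
</pre>

The range argument is exactly the knob a JUCE slider would expose: widen it and the query returns more (and slower) matches; narrow it and it comes back empty.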

screen shot!

future directions

As I hinted in my demonstration (by sampling Getting Better by the Beatles, Faster by Janelle Monae, and Stronger by Kanye West), it needs to be better, faster, stronger. Mostly faster. If I were a little smarter about threading and overlap-adding, and if I got my database up to snuff, this could be much more compelling. I considered getting rid of the database altogether and implementing some sort of tree or multidimensional data structure. It'd also benefit from some sort of dimensionality reduction algorithm (PCA would've been nice) to better separate the data within the feature space.

From a more experiential standpoint, interaction with the visualizer could allow the user to actively explore the feature space. Some ideas are to provide ways for the user to sonify samples, clusters of samples, or even synthesize new pieces by drawing paths through the feature space. The potential for sprawl is endless.
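For what that PCA step might look like, here is a minimal sketch assuming the Eigen library (pcaProject is a placeholder name of mine): center the feature rows, eigen-decompose their covariance, and keep the top-k components.

<pre>
// A minimal sketch of the PCA idea, assuming the Eigen library; not part of
// the actual system. Rows of X are frames, columns are features.
#include <Eigen/Dense>

Eigen::MatrixXd pcaProject(const Eigen::MatrixXd& X, int k)
{
    // Center each feature column around its mean.
    Eigen::MatrixXd centered = X.rowwise() - X.colwise().mean();

    // Covariance matrix of the features.
    Eigen::MatrixXd cov =
        (centered.adjoint() * centered) / double(X.rows() - 1);

    // Eigen sorts the eigenvalues of a symmetric matrix in ascending order,
    // so the top-k principal directions are the last k eigenvector columns.
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> solver(cov);
    Eigen::MatrixXd components = solver.eigenvectors().rightCols(k);

    // Project: each frame becomes a k-dimensional point.
    return centered * components;
}
</pre>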