= Intuition =

Kevin Montag's Music 256A Final Project, Fall 2009

== Concept ==

Intuition is a program that seeks to make audio production more intuitive. It centers on transforming existing audio to give it desired perceptual qualities - a particular smoothness, brightness, and so on.

The user starts by specifying some songs that she likes, and what she likes about them - the great shimmer of that new Jay-Z single, or the starkness of that old Johnny Cash ballad. Then she specifies the tools she wants to use to achieve that kind of sound, in the form of LADSPA plugins. Finally, she specifies some sonic qualities that she thinks will be relevant when the program has to decide what to do with those plugins for sounds it hasn't seen before.

Intuition then uses machine learning algorithms to deduce a mapping from qualities of an input sound to parameters for the LADSPA plugins, allowing it to take arbitrary inputs and make them sound more like that Johnny Cash song in real time. Or, depending on the plugin and feature choices, it may become an instrument of its own.
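
As a rough sketch of the idea (the names here are hypothetical, not Intuition's actual classes), the real-time path boils down to: extract a feature vector from the current audio block, feed it through the learned mapping, and use the result as control values for the plugin chain:

<pre>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the real-time path.  The single stand-in feature is
// RMS level; the stand-in "learned model" is a plain linear map with one row
// of weights (plus a bias) per plugin control parameter.
std::vector<float> extractFeatures(const float* block, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) sum += block[i] * block[i];
    std::vector<float> features;
    features.push_back(std::sqrt(sum / n));
    return features;
}

std::vector<float> mapToParameters(const std::vector<float>& features,
                                   const std::vector<std::vector<float> >& theta) {
    std::vector<float> params;
    for (std::size_t r = 0; r < theta.size(); ++r) {
        float p = theta[r].back();                        // bias term
        for (std::size_t i = 0; i < features.size(); ++i)
            p += theta[r][i] * features[i];
        params.push_back(p);                              // value for one plugin control
    }
    return params;
}
</pre>

In the real program the mapping is locally weighted rather than a single global linear fit, as described under Learning below.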

== Interface ==

Intuition has a relatively minimal UI. The user makes four selections - input features, plugins to use, songs to mimic, and features of those songs to target - and then presses "Go!" The algorithm does its work to learn a mapping - this is fairly time-consuming - and then opens up JACK ports at the inputs and outputs of the plugin chain that the user has specified. While audio is being fed to the program, the values of the control parameters for the plugins are displayed, as well as a happy or sad face to let the user know how well the program is doing at matching her desired features. The program does its best to display all the necessary information without feeling cluttered.
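
The JACK side of that can be sketched roughly as follows (a minimal pass-through client, not Intuition's actual code; the real program registers ports at both ends of the user's plugin chain and updates the plugin controls inside the process callback):

<pre>
#include <unistd.h>
#include <jack/jack.h>

// Minimal JACK client sketch: one input port feeding the plugin chain and one
// output port carrying the transformed audio.  The body of process() is where
// features would be extracted and plugin controls updated.
static jack_port_t *in_port;
static jack_port_t *out_port;

static int process(jack_nframes_t nframes, void*) {
    float* in  = (float*) jack_port_get_buffer(in_port,  nframes);
    float* out = (float*) jack_port_get_buffer(out_port, nframes);
    for (jack_nframes_t i = 0; i < nframes; ++i)
        out[i] = in[i];                                    // pass-through placeholder
    return 0;
}

int main() {
    jack_client_t* client = jack_client_open("intuition", JackNullOption, 0);
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput,  0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
    jack_set_process_callback(client, process, 0);
    jack_activate(client);
    while (true) sleep(1);                                 // keep the client running
    return 0;
}
</pre>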

In the future, options will be added to fine-tune the parameters of the machine learning algorithm straight from the interface, but these won't be placed in a prominent spot.

The other main UI improvement for the future is to add the ability to set ranges for individual plugin parameters. Currently, plugin parameters can take on any values, so there are often large changes in the sound over small periods of time if parameters are not chosen very carefully.
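
One simple way this could eventually work (a hypothetical sketch, not an implemented feature) is to clamp each predicted value into its user-specified range and limit how far it can move per block:

<pre>
#include <algorithm>

// Hypothetical sketch of per-parameter range handling: clamp the prediction
// into [lo, hi], then limit the per-block change so the sound can't jump.
float constrainParameter(float predicted, float lo, float hi,
                         float previous, float maxStep) {
    float target = std::max(lo, std::min(predicted, hi));   // stay inside the range
    float step   = std::max(-maxStep, std::min(target - previous, maxStep));
    return previous + step;                                  // rate-limited move
}
</pre>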

== Learning ==

A number of machine learning algorithms are used in Intuition. The primary one is a locally-weighted linear regression, which can be performed in real time once data points have been provided. Applying input-specific weights to our training data allows us to avoid assuming that the mapping from input features to plugin parameters is linear - it clearly isn't!
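
For a single input feature and a single plugin parameter, the locally-weighted fit at a query point can be sketched as below (a simplified stand-in for the real model, which works on full feature vectors):

<pre>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of locally-weighted linear regression for one input feature x and one
// plugin parameter y.  Each training point is weighted by its closeness to the
// query, so the fit is effectively re-done around every new input.
float lwrPredict(const std::vector<float>& x, const std::vector<float>& y,
                 float query, float tau) {
    float sw = 0, swx = 0, swy = 0, swxx = 0, swxy = 0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        float d = x[i] - query;
        float w = std::exp(-d * d / (2 * tau * tau));    // Gaussian kernel weight
        sw   += w;
        swx  += w * x[i];
        swy  += w * y[i];
        swxx += w * x[i] * x[i];
        swxy += w * x[i] * y[i];
    }
    // Weighted least-squares solution for y = intercept + slope * x.
    float slope     = (sw * swxy - swx * swy) / (sw * swxx - swx * swx);
    float intercept = (swy - slope * swx) / sw;
    return intercept + slope * query;
}
</pre>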

The program can also perform unsupervised clustering of features to extract more than one "quintessential" point in feature-space from a given piece of target audio, and can learn using logistic regression to choose which of these points to target given a new piece of input.
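
In the two-cluster case, the choice between quintessential points can be sketched as a logistic classifier over the input features (hypothetical names; the real code generalizes to more clusters):

<pre>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: given the input's feature vector, a logistic model
// decides which of two "quintessential" target points to aim for.
int chooseTarget(const std::vector<float>& features, const std::vector<float>& theta) {
    float z = theta.back();                               // bias term
    for (std::size_t i = 0; i < features.size(); ++i)
        z += theta[i] * features[i];
    float p = 1.0f / (1.0f + std::exp(-z));               // sigmoid
    return p > 0.5f ? 0 : 1;                              // index of the target point
}
</pre>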

The most challenging computational aspect of the program, though, turns out not to be the learning algorithms, but rather generating data points for them - the parameter space for most plugins is very large, and it's hard to find an optimum value. The program finds optima by computing the values of target features for random points in the parameter space, and then performing gradient ascent seeded with the point that looks most promising - this works to an extent, but is still susceptible to local optima in the space.
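
That search can be sketched as random sampling followed by finite-difference gradient ascent (a simplified sketch; the score function here is a hypothetical stand-in for "apply the plugins with these parameters and measure how close the resulting features are to the target"):

<pre>
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical sketch of the optimum search.  Parameters are assumed
// normalized to [0, 1]: score random settings, keep the most promising one,
// then climb the gradient estimated by finite differences.
std::vector<float> findOptimum(float (*score)(const std::vector<float>&),
                               std::size_t dims, int samples, int steps,
                               float eps = 1e-3f, float rate = 0.05f) {
    std::mt19937 rng(0);
    std::uniform_real_distribution<float> unif(0.0f, 1.0f);

    std::vector<float> best(dims);
    float bestScore = -1e30f;
    for (int s = 0; s < samples; ++s) {                   // random seeding
        std::vector<float> p(dims);
        for (std::size_t d = 0; d < dims; ++d) p[d] = unif(rng);
        float sc = score(p);
        if (sc > bestScore) { bestScore = sc; best = p; }
    }
    for (int s = 0; s < steps; ++s) {                     // gradient ascent
        float base = score(best);
        std::vector<float> grad(dims);
        for (std::size_t d = 0; d < dims; ++d) {
            std::vector<float> probe = best;
            probe[d] += eps;
            grad[d] = (score(probe) - base) / eps;        // finite-difference slope
        }
        for (std::size_t d = 0; d < dims; ++d)
            best[d] += rate * grad[d];
    }
    return best;                                          // may still be a local optimum
}
</pre>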

== Design ==

The program is designed with extensibility in mind. Currently, features are extracted from audio using functions from libxtract, but it's easy to create new features (by subclassing an abstract feature class) and integrate them seamlessly with the program. Algorithms and parameters are modular, and interfaces to external libraries and protocols are presented in terms of the abstractions that are "native" to Intuition. The algorithmic side of the program is very much independent of its UI; it was designed with the intention that it could be used as a standalone library for other applications.
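
The extension point looks roughly like the sketch below (illustrative names; real features delegate to libxtract functions rather than computing values by hand):

<pre>
#include <cmath>
#include <vector>

// Illustrative sketch of the extension point: an abstract feature class whose
// subclasses compute one value per block of samples.
class Feature {
public:
    virtual ~Feature() {}
    virtual const char* name() const = 0;
    virtual float compute(const std::vector<float>& block) const = 0;
};

// A hand-rolled example feature; Intuition's real features wrap libxtract.
class RMSFeature : public Feature {
public:
    const char* name() const { return "rms"; }
    float compute(const std::vector<float>& block) const {
        float sum = 0.0f;
        for (std::size_t i = 0; i < block.size(); ++i) sum += block[i] * block[i];
        return std::sqrt(sum / block.size());
    }
};
</pre>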

The program is also designed for performance, since it needs to do some heavy lifting in real time. For a given piece of audio, features are computed only once, and libxtract's philosophy of "cascading" features is built into the abstractions for audio objects. Numerical corners are cut where appropriate in the learning algorithms to speed them up.
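
The compute-once behaviour can be sketched as a small per-block cache (a simplified stand-in for the real audio-object abstraction):

<pre>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Simplified stand-in for the audio-object abstraction: each feature is
// computed at most once per block, and later ("cascading") features can look
// up earlier results from the same cache.
class AudioBlock {
public:
    explicit AudioBlock(const std::vector<float>& samples) : samples_(samples) {}

    float feature(const std::string& name,
                  const std::function<float(const std::vector<float>&)>& compute) {
        std::map<std::string, float>::const_iterator it = cache_.find(name);
        if (it != cache_.end()) return it->second;        // reuse the cached value
        float value = compute(samples_);
        cache_[name] = value;
        return value;
    }

private:
    std::vector<float> samples_;
    std::map<std::string, float> cache_;
};
</pre>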

Intuition uses a number of external libraries and APIs: libxtract and FFTW for feature extraction, libsndfile for reading training data, JACK for outputting and receiving real-time audio, LADSPA for transforming audio, and GTK for its UI.

== Milestones ==

The milestones accomplished for the project, in approximately chronological order, were:

Milestone 1: Get a framework up and running for reading/writing files, extracting features, processing collections of files, etc.

Milestone 2: Figure out the "extent" of the program - better define the user experience and the program's capabilities.

Milestone 3: Get a user interface up and running that allows the input features, plugins, target files, and target features to be specified.

Milestone 4: Design an API for linking the interface with the computational backend, and refactor existing code.

Milestone 5: Implement the computational backend.

Milestone 6: Integrate the UI and learning algorithms, revising the API and incrementally redesigning the backend with performance in mind.

Milestone 7: Polish the UI, and add new ways of giving feedback.

== Thanks ==

to Matt Hoffman for some good advice regarding the problem of generating data points for the learning algorithms!

to Ge Wang for helping me turn a nebulous idea into a concrete one!