CCRMA Workshop: Music Information Retrieval

Revision as of 15:11, 29 July 2008

NOTE: THIS PAGE IS A WORK IN PROGRESS -- it is by no means complete yet. Thanks!

logistics

  • Summer 2008: July 21 through August 1, 2008, 9 AM - 5 PM
  • Instructors: Jay LeBoeuf (http://www.imagine-research.com/) and Ge Wang (http://ccrma.stanford.edu/~ge/)

workshop title

  • "Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval"

workshop outline

  • Administration
    • Introductions
    • CCRMA Overview
    • Final Projects
  • Introduction to Capabilities and Applications of MIR
  • Basic System Overview
    • Timing and Segmentation
    • Onset
    • Beat & Tempo
    • Zero Crossing Rate
    • Classification: Heuristic Analysis
    • k-NN
  • Survey of the field, real-world applications, MIR research, and challenges
  • Current commercial applications
  • Searching, Query Systems, etc.
    • Playlisting systems
    • Select commercial MIR projects
    • Academic MIR research projects
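One of the simplest features in the outline above is the zero crossing rate. As a minimal sketch (in plain Python rather than the Matlab/ChucK used in the labs; the function name and test signal are illustrative, not workshop code):

```python
def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose signs differ.

    High ZCR suggests noisy or high-frequency content (hi-hats, fricatives);
    low ZCR suggests tonal, low-frequency content.
    """
    if len(samples) < 2:
        return 0.0
    crossings = sum(
        1 for a, b in zip(samples, samples[1:])
        if (a >= 0) != (b >= 0)
    )
    return crossings / (len(samples) - 1)

# An alternating signal crosses zero at every step:
print(zero_crossing_rate([1, -1, 1, -1, 1]))  # → 1.0
```

Despite its simplicity, ZCR is a surprisingly useful discriminator, e.g. between voiced and unvoiced speech or between kick drums and cymbals.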

Feature Extraction

  • Low Level Features
    • "Classic" Spectral features (Centroid, Flux, RMS, Rolloff, Flatness, Kurtosis)
    • Spectral bands
    • Zero Crossing
    • Chroma bins
    • Spectral Bands / Filters
    • MFCC
    • MPEG-7
  • Higher-level features
    • Key Estimation
    • Chord Estimation
    • Genre (genre, artist ID, similarity)
    • "Fingerprints"
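Of the "classic" spectral features listed above, the spectral centroid is the amplitude-weighted mean of the FFT bin frequencies, often heard as "brightness". A sketch in plain Python (the bin layout and names are assumptions for illustration, not workshop code):

```python
def spectral_centroid(magnitudes, sample_rate, fft_size):
    """Amplitude-weighted mean of bin frequencies in Hz.

    `magnitudes` holds the magnitude of each FFT bin up to Nyquist.
    """
    total = sum(magnitudes)
    if total == 0:
        return 0.0
    bin_hz = sample_rate / fft_size  # frequency spacing between bins
    return sum(i * bin_hz * m for i, m in enumerate(magnitudes)) / total

# All energy in bin 10 of a 1024-point FFT at 44100 Hz:
mags = [0.0] * 512
mags[10] = 1.0
print(spectral_centroid(mags, 44100, 1024))  # → 10 * 44100/1024 ≈ 430.7 Hz
```

The other spectral moments (bandwidth, skewness, kurtosis) follow the same pattern, treating the normalized magnitude spectrum as a probability distribution over frequency.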

Rhythm Analysis

  • Onset Detection
  • Beat Detection
  • Meter detection

Data Reduction Techniques

  • Feature Selection
  • PCA / LDA

Structure and Segmentation

  • Structural Analysis and Segmentation

Classification Algorithms

  • k-NN
  • SVM
  • HMM
  • Neural Nets

Classification

  • concept and design
  • genre-classification
  • similarity retrieval
  • instrument/speaker/source identification

Evaluation Methodology

  • Data set construction
  • Practical Classifier techniques
  • Feature selection
  • Cross Validation
  • Information Retrieval metrics (precision, recall, F-Measure)
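The listed IR metrics follow directly from true-positive, false-positive, and false-negative counts. A small sketch (plain Python, illustrative names and counts):

```python
def precision_recall_f(tp, fp, fn):
    """Standard IR metrics from raw counts.

    Precision: of the items we retrieved, how many were right?
    Recall: of the items we should have retrieved, how many did we get?
    F-measure: harmonic mean of the two.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f = (2 * precision * recall / (precision + recall)
         if (precision + recall) else 0.0)
    return precision, recall, f

# 8 hits, 2 false alarms, 4 misses:
p, r, f = precision_recall_f(8, 2, 4)
print(p, r, f)  # → 0.8, 0.666..., 0.727...
```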

Some Tentative Lab Exercises

  • Feature extraction from audio
  • Classification tasks
  • Prototyping real-time MIR algorithms and systems with ChucK/UAna
  • Feature Extraction: Sort & Play slices via feature values. Add cowbell on just a certain beat, note, or word. Do a simple transcription (via thresholds) of a loop.
  • Load a folder of sample-A examples, then Sample-B examples. Feature extract each and build a classifier. Then, load a "test example" and compare it against the A and B via classifier. Look at classification and distance from examples as probability.
  • Building an Instrument Identifier Tool using source audio material
  • Organization of data sets and Evaluating system accuracy
  • Speaker change detection
  • Clustering Techniques Demo: Song Segmentation, Drum Transcription
  • Dan Ellis' Practicals
  • Investigate the classifier accuracy as the number of Gaussian mixture components varies

Possible guest lectures/visits from local music information retrieval start-ups.

software, libraries, examples

Applications & Environments

Machine Learning Libraries & Toolboxes

Optional Toolboxes

Abstract

This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.

MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to “listen” and “understand or make sense of” audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music – tempo, key, chord progressions, genre, or song structure – MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.

This workshop will target students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as onset timings and chord estimations, audio similarity clustering, search and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.

The workshop will consist of half-day lectures, half-day supervised lab sessions, classroom exercises, demonstrations, and discussions.

Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.

Knowledge of basic digital audio principles and familiarity with basic programming (Matlab, C/C++, and/or ChucK) will be useful. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.

Labs

Lab 1 - Playing with audio slices

Abstract: This lab will introduce you to the practice of analyzing, segmenting, extracting features from, and applying basic classifications to audio files. Our future labs will build upon this essential work, but will use more sophisticated training sets, features, and classifiers.
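As a rough illustration of the kind of slicing this lab covers, a naive energy-threshold segmenter might look like the following (plain Python sketch; the frame size and threshold are arbitrary choices, and the real lab works on actual audio files):

```python
import math

def rms(frame):
    """Root-mean-square energy of one frame of samples."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def active_frames(samples, frame_size, threshold):
    """Start indices of frames whose RMS exceeds `threshold` (a crude slicer)."""
    return [
        i for i in range(0, len(samples) - frame_size + 1, frame_size)
        if rms(samples[i:i + frame_size]) > threshold
    ]

# Silence, then a loud burst, then silence again:
signal = [0.0] * 4 + [0.9, -0.9, 0.9, -0.9] + [0.0] * 4
print(active_frames(signal, 4, 0.1))  # → [4]
```

Consecutive active frames can then be merged into slices, which is the starting point for sorting and playing slices by feature value.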

Lab 2 - My first audio classifier

Abstract: My first audio classifier: introducing k-NN! We can now appreciate why we need additional intelligence in our systems - heuristics can't get very far in the world of complex audio signals. We'll be using Netlab's implementation of k-NN for our work here. It proves to be a straightforward, easy-to-use implementation. The steps and skills of working with one classifier will scale nicely to working with other, more complex classifiers. We're also going to be using the new features in our arsenal: cherishing those "spectral moments" (centroid, bandwidth, skewness, kurtosis) and also examining other spectral statistics.
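The lab uses Netlab's Matlab implementation; purely for intuition, the k-NN idea itself fits in a few lines of Python (the feature values below are made up, standing in for extracted spectral features):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Majority vote among the k nearest labeled feature vectors.

    `train` is a list of (feature_vector, label) pairs.
    """
    neighbors = sorted(
        train, key=lambda pair: math.dist(pair[0], query)
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy 2-D features (say, centroid vs. ZCR), two classes:
train = [((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
         ((0.9, 0.8), "B"), ((0.8, 0.9), "B")]
print(knn_classify(train, (0.15, 0.15), k=3))  # prints "A"
```

Note that k-NN has no training step at all: "building" the classifier is just storing the labeled feature vectors, which is why it is such a friendly first classifier.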

Lab 4 - Cluster Lab

Abstract: Sometimes an unsupervised learning technique is preferred. Perhaps you do not have access to adequate training data, the labels for the training data are not completely clear, or you just want to quickly sort real-world, unseen data into groups based on feature similarity. Regardless of your situation, clustering is a great option! This lab also introduces MFCCs as a main measure of timbral similarity.
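For intuition, the classic clustering workhorse is k-means, sketched here in plain Python (the lab itself uses Matlab toolboxes; the points and seed below are illustrative):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid, re-average.

    `points` is a list of equal-length feature tuples (e.g. MFCC vectors).
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data itself
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if its cluster emptied
                centroids[i] = tuple(
                    sum(coord) / len(members) for coord in zip(*members)
                )
    return centroids, clusters

# Two well-separated blobs of 2-D feature vectors:
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
          (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
centroids, clusters = kmeans(points, 2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

No labels were needed: the grouping emerges from feature similarity alone, which is exactly the appeal of clustering described above.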

Lab 6 - Gaussian Mixture Models

Abstract: By the end of this lab, you will understand how to use Gaussian mixture models (GMMs) - a probabilistic clustering and "soft classification" technique.
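The "soft classification" idea can be illustrated by computing component responsibilities for a fixed 1-D mixture (plain Python sketch; the mixture parameters below are made up, and a real GMM would fit them from data with EM):

```python
import math

def gaussian_pdf(x, mean, var):
    """1-D Gaussian density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def responsibilities(x, components):
    """Posterior probability of each mixture component having generated x.

    `components` is a list of (weight, mean, variance) triples.
    Instead of a hard label, each component gets a probability - the
    "soft classification" of a GMM.
    """
    likelihoods = [w * gaussian_pdf(x, m, v) for w, m, v in components]
    total = sum(likelihoods)
    return [l / total for l in likelihoods]

# Two equally weighted unit-variance components centered at 0 and 5:
mix = [(0.5, 0.0, 1.0), (0.5, 5.0, 1.0)]
print(responsibilities(1.0, mix))  # first component dominates, second keeps a tiny share
print(responsibilities(2.5, mix))  # midpoint: an even 50/50 split
```

These responsibilities are exactly the quantities the EM algorithm re-estimates on each iteration when fitting the mixture.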

Audio source material

Some helpful matlab scripts and utilities (Courtesy of the MIR Workshop 2008 summer students)

References for additional info

Tools:

Toolboxes to explore:

Online Tutorials / Course materials:

Papers:

Tempo Papers

Clustering

Evaluation

Books - NOT YET REVIEWED

   * Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.
   * Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.
   * "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.
   * Machine Learning, Tom Mitchell, McGraw Hill, 1997.

Music Recommendation and Discovery


Student Projects

Audio Source Material

OLPC Sound Sample Archive (8.5 GB) [1]

RWC Music Database (n DVDs) [available in Stanford Music library]

RWC - Sound Instruments Table of Contents

http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html

Final Projects


Installing Toolboxes

To install libsvm for Matlab:

  1. In your home folder, create a folder called "Matlab".
  2. Download libsvm to your local Matlab folder.
  3. Within the libsvm folder, open the file Makefile.
  4. On the 2nd line, change /usr/local/matlab to /opt/matlabR2006b.
  5. Open a terminal, cd to that folder, and type make.

MATLAB Utility Scripts

ChucK

ChucK is a strongly-timed audio programming language that we will be using for real-time audio analysis.