Revision as of 18:38, 28 July 2008

CCRMA Workshop: Music Information Retrieval

NOTE: THIS PAGE IS A WORK IN PROGRESS -- IT IS BY NO MEANS COMPLETE YET. Thanks!

logistics

workshop title

  • "Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval"

workshop outline

  • Administration
    • Introductions
    • CCRMA Overview
    • Final Projects
  • Introduction to Capabilities and Applications of MIR
  • Basic System Overview
    • Timing and Segmentation
    • Onset
    • Beat & Tempo
    • Zero Crossing Rate
    • Classification: Heuristic Analysis
    • k-NN
  • Survey of the field, real-world applications, MIR research, and challenges
  • Current commercial applications
    • Searching, Query Systems, etc.
    • Playlisting systems
    • Select commercial MIR projects
    • Academic MIR research projects

Feature Extraction

  • Low Level Features
    • "Classic" Spectral features (Centroid, Flux, RMS, Rolloff, Flatness, Kurtosis)
    • Spectral bands
    • Zero Crossing
    • Chroma bins
    • Spectral Bands / Filters
    • MFCC
    • MPEG-7
  • Higher-level features
    • Key Estimation
    • Chord Estimation
    • Genre (genre, artist ID, similarity)
    • "Fingerprints"
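As a concrete illustration of two of the low-level features listed above, here is a minimal Python sketch (the workshop labs use MATLAB and ChucK, but the formulas are the same in any language) of zero-crossing rate over a time-domain frame and spectral centroid over a magnitude spectrum; the signal and spectrum values are made up for the example:

```python
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

def spectral_centroid(magnitudes, sample_rate, fft_size):
    """Magnitude-weighted mean frequency (Hz) of a spectrum."""
    freqs = [k * sample_rate / fft_size for k in range(len(magnitudes))]
    total = sum(magnitudes)
    if total == 0:
        return 0.0
    return sum(f * m for f, m in zip(freqs, magnitudes)) / total

# A 1 kHz sine at 8 kHz sampling: two crossings per 8-sample cycle.
sine = [math.sin(2 * math.pi * 1000 * n / 8000) for n in range(256)]
print(zero_crossing_rate(sine))                    # ~0.25
print(spectral_centroid([0, 0, 1.0, 0], 8000, 8))  # all energy in bin 2 -> 2000.0 Hz
```

Real systems compute these per windowed frame and feed the resulting feature vectors to the classifiers discussed later.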

Rhythm Analysis

  • Onset Detection
  • Beat Detection
  • Meter detection
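A common first approach to onset detection is spectral flux with peak picking: sum the positive magnitude increases between consecutive spectral frames, then mark local maxima above a threshold. A minimal Python sketch on hand-made toy spectra (a real system would window and FFT the audio first):

```python
def spectral_flux(prev_mags, mags):
    """Sum of positive magnitude increases between consecutive frames."""
    return sum(max(m - p, 0.0) for p, m in zip(prev_mags, mags))

def detect_onsets(frames, threshold):
    """Return frame indices where flux is a local peak above threshold."""
    flux = [0.0] + [spectral_flux(a, b) for a, b in zip(frames, frames[1:])]
    onsets = []
    for i in range(1, len(flux) - 1):
        if flux[i] > threshold and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]:
            onsets.append(i)
    return onsets

# Toy magnitude spectra: quiet, loud attack, decay, quiet, another attack, decay.
frames = [
    [0.1, 0.1], [0.9, 0.8], [0.5, 0.4], [0.1, 0.1], [0.8, 0.9], [0.4, 0.3],
]
print(detect_onsets(frames, threshold=0.5))  # [1, 4]
```

Beat and tempo estimation typically build on an onset (or flux) curve like this one, e.g. by autocorrelating it.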

Data Reduction Techniques

  • Feature Selection
  • PCA / LDA
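To make PCA concrete: it projects feature vectors onto the directions of greatest variance. Below is a minimal Python sketch for the two-dimensional case only, where the leading eigenvector of the 2x2 covariance matrix has a closed form; toolboxes such as Netlab generalize this to arbitrary dimension:

```python
import math

def first_principal_component(points):
    """Unit vector along the direction of greatest variance for 2-D points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Entries of the 2x2 covariance matrix [[sxx, sxy], [sxy, syy]].
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Largest eigenvalue of the covariance matrix.
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    # Corresponding eigenvector (with a branch for the axis-aligned case).
    if abs(sxy) > 1e-12:
        vx, vy = lam - syy, sxy
    else:
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Points spread along the line y = x: principal direction is near (0.707, 0.707).
pts = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.05)]
vx, vy = first_principal_component(pts)
print(round(vx, 2), round(vy, 2))  # roughly 0.71 0.71
```

Projecting each feature vector onto the first few such directions reduces dimensionality while keeping most of the variance.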

Structure and Segmentation

  • Structural Analysis and Segmentation

Classification Algorithms

  • k-NN
  • SVM
  • HMM
  • Neural Nets
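Of these, k-NN is the simplest to implement: store labeled training feature vectors and classify a query by majority vote among its k nearest neighbors. A minimal Python sketch with made-up two-dimensional features (zero-crossing rate and spectral centroid in kHz are just illustrative choices):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label); majority label of k nearest."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy features: (zero-crossing rate, spectral centroid in kHz).
train = [
    ((0.05, 0.4), "bass drum"), ((0.07, 0.5), "bass drum"), ((0.06, 0.6), "bass drum"),
    ((0.45, 6.0), "hi-hat"), ((0.50, 7.0), "hi-hat"), ((0.48, 6.5), "hi-hat"),
]
print(knn_classify(train, (0.40, 5.5), k=3))  # hi-hat
```

SVMs, HMMs, and neural nets trade this simplicity for better generalization; the labs use library implementations (e.g. libsvm, Netlab) rather than code like this.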

Classification

  • concept and design
  • genre-classification
  • similarity retrieval
  • instrument/speaker/source identification

Evaluation Methodology

  • Data set construction
  • Practical Classifier techniques
  • Feature selection
  • Cross Validation
  • Information Retrieval metrics (precision, recall, F-Measure)
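The three IR metrics above fit in a few lines: precision is the fraction of retrieved items that are relevant, recall is the fraction of relevant items that were retrieved, and F-measure is their harmonic mean. A minimal Python sketch:

```python
def precision_recall_f(retrieved, relevant):
    """IR metrics over sets of retrieved and relevant item ids."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)          # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# 3 of the 4 retrieved songs are relevant; 6 songs were relevant in total.
p, r, f = precision_recall_f({1, 2, 3, 9}, {1, 2, 3, 4, 5, 6})
print(p, r, round(f, 2))  # 0.75 0.5 0.6
```

Cross validation then repeats such measurements over several train/test splits of the data set and averages the results.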

Some Tentative Lab Exercises

  • Feature extraction from audio
  • Classification tasks
  • Prototyping real-time MIR algorithms and systems with ChucK/UAna
  • Feature Extraction: Sort & Play slices via feature values. Add cowbell on just a certain beat, note, or word. Do a simple transcription (via thresholds) of a loop.
  • Load a folder of sample-A examples, then sample-B examples. Extract features from each and build a classifier. Then load a "test example" and classify it against A and B. Look at the classification, and at the distance from the examples, as a probability.
  • Building an Instrument Identifier Tool using source audio material
  • Organization of data sets and Evaluating system accuracy
  • Speaker change detection
  • Clustering Techniques Demo: Song Segmentation, Drum Transcription
  • Dan Ellis' Practicals
  • Investigate classifier accuracy as the number of Gaussian mixture components varies
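The sample-A / sample-B lab above can be sketched in a few lines of Python. Folder loading and real feature extraction are omitted: hand-made feature vectors stand in for extracted features, the classifier is a simple nearest-class-mean rule, and normalized inverse distances serve as a rough stand-in for probability:

```python
import math

def class_mean(vectors):
    """Componentwise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify_with_confidence(a_examples, b_examples, test):
    """Nearest class mean; inverse distances renormalized as a rough probability."""
    da = math.dist(class_mean(a_examples), test)
    db = math.dist(class_mean(b_examples), test)
    # Closer mean -> higher score (a zero distance would need a guard in real code).
    pa = (1 / da) / (1 / da + 1 / db)
    return ("A", pa) if pa >= 0.5 else ("B", 1 - pa)

# Stand-in feature vectors for the sample-A and sample-B folders.
a = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15]]
b = [[0.9, 0.8], [0.8, 0.9], [0.85, 0.85]]
label, prob = classify_with_confidence(a, b, [0.2, 0.2])
print(label, round(prob, 2))  # A with high confidence
```

The lab proper replaces the hand-made vectors with features extracted from each audio file and a real classifier such as k-NN or an SVM.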

Possible guest lectures/visits from local music information retrieval start-ups.

potential software, libraries, examples

  • MATLAB
  • ChucK / UAna
  • Sonic Visualiser [1]
  • Marsyas
  • CLAM
  • Machine Learning Libraries
  • Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)
  • Netlab Pattern Recognition and Clustering Toolbox (Matlab)
  • libsvm SVM toolbox (Matlab)
  • MA Toolbox / MIDI Toolbox
  • MIR Toolboxes (Matlab)
  • [see also below references]

Abstract

This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.

MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to “listen to” and “make sense of” audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music – tempo, key, chord progressions, genre, or song structure – MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, and transcription, and even to aid or generate real-time performance.

This workshop will target students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as onset timings and chord estimations, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.

The workshop will consist of half-day lectures, half-day supervised lab sessions, classroom exercises, demonstrations, and discussions.

Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.

Knowledge of basic digital audio principles and familiarity with basic programming (Matlab, C/C++, and/or ChucK) will be useful. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.

References for additional info

Tools:

Toolboxes to explore:

Online Tutorials / Course materials:

Papers:

Tempo Papers

Clustering

Evaluation

Books - NOT YET REVIEWED

   * Christopher M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
   * R. Duda, P. Hart, and D. Stork, Pattern Classification, 2nd edition, Wiley Interscience, 2001.
   * S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 2nd edition, Prentice Hall, 2003.
   * Tom Mitchell, Machine Learning, McGraw Hill, 1997.

Music Recommendation and Discovery


Student Projects

Audio Source Material

OLPC Sound Sample Archive (8.5 GB) [2]

RWC Music Database (n DVDs) [available in Stanford Music library]

RWC - Sound Instruments Table of Contents

http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html

Final Projects


Installing Toolboxes

  • In your Home folder, create a folder called "Matlab"
  • Download libsvm to your local Matlab folder
  • Within the libsvm folder, open the file Makefile
  • On the 2nd line, change /usr/local/matlab to /opt/matlabR2006b
  • Open a terminal, cd to the libsvm folder, and type make

MATLAB Utility Scripts

ChucK

  • [http://chuck.stanford.edu/ Chuck @ Stanford]
  • [http://electro-music.com/forum/forum-140.html the electro-music.com ChucK forum]
  • [http://www.youtube.com/watch?v=2rpk461T6l4 Ge presents ChucK to the Stanford HCI seminar]
  • [http://slork.stanford.edu/ Slork, the Stanford Laptop Orchestra]