MIR workshop 2009
- 1 Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval
- 1.1 logistics
- 1.2 Abstract
- 1.3 Workshop syllabus
- 1.4 software, libraries, examples
- 1.5 Lectures & Labs 2009 - WORK IN PROGRESS - DRAFT ONLY
- 1.6 Jay's Lectures 2008
- 1.7 Labs
- 1.8 References for additional info
- 1.9 Audio Source Material
- 1.10 THE WIKI VALUE-ADD - Supplemental information for the lectures...
- 1.11 MATLAB Utility Scripts
Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval
Workshop Title: "Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval"
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.
Workshop structure: The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.
- CCRMA Overview
- Introduction to Capabilities and Applications of MIR
- Why MIR?
- Overview of potential research and commercial applications
- Basic System Overview and Architecture
Timing and Segmentation
- Frames and Windows
- Onset Detection
- Beat & Tempo Extraction
- Low Level Features
- Zero Crossing
- Temporal centroid, Log Attack time, Attack slope
- Spectral features (Centroid, Flux, RMS, Rolloff, Flatness, Kurtosis, Brightness)
- Spectral bands
- Log spectrogram
- Chroma bins
- Higher-level features
- Key Estimation
- Chord Estimation
- Genre, artist ID, and similarity
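As a rough illustration of two of the low-level features above, here is a minimal Python sketch (the labs themselves use Matlab) of zero-crossing rate and spectral centroid, assuming a time-domain frame and a magnitude spectrum are already in hand:

```python
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:])
                    if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def spectral_centroid(magnitudes, sample_rate, fft_size):
    """Magnitude-weighted mean frequency of a spectrum, in Hz."""
    freqs = [k * sample_rate / fft_size for k in range(len(magnitudes))]
    total = sum(magnitudes)
    if total == 0:
        return 0.0
    return sum(f * m for f, m in zip(freqs, magnitudes)) / total

# A pure 5 Hz tone sampled at 100 Hz crosses zero twice per cycle,
# so its ZCR stays low; noisy/bright signals cross far more often.
sine = [math.sin(2 * math.pi * 5 * n / 100) for n in range(100)]
print(zero_crossing_rate(sine))  # ~0.09
```

The same two statistics are what the heuristic and k-NN classifiers later in the workshop consume as inputs.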
Analysis / Decision Making
- Heuristic Analysis
- Clustering and probability density models
Model / Data Preparation Techniques
- Data Preparation
- Scaling data
- Model organization
- concept and design
- Data set construction and organization
- Feature selection
- Cross Validation
- Information Retrieval metrics (precision, recall, F-Measure)
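The IR metrics listed above are simple set ratios; a small Python sketch (hypothetical item IDs, just for illustration) makes the definitions concrete:

```python
def precision_recall_f(retrieved, relevant):
    """Standard IR metrics over sets of item IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_pos = len(retrieved & relevant)
    precision = true_pos / len(retrieved) if retrieved else 0.0
    recall = true_pos / len(relevant) if relevant else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# 3 of the 4 retrieved items are relevant (precision 0.75),
# but only 3 of the 6 relevant items were found (recall 0.5).
p, r, f = precision_recall_f([1, 2, 3, 9], [1, 2, 3, 4, 5, 6])
print(p, r, f)  # 0.75 0.5 0.6
```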
Plus guest lectures/visits from academic experts and real-world folks.
software, libraries, examples
Applications & Environments
- ChucK / UAna
- Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)
- Sonic Visualiser
Machine Learning Libraries & Toolboxes
- Netlab Pattern Recognition and Clustering Toolbox (Matlab)
- libsvm SVM toolbox (Matlab)
- MIR Toolboxes (Matlab)
- MA Toolbox
- MIDI Toolbox
- [see also references below]
- UCSD CatBox
- Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/
- Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/
- HTK http://htk.eng.cam.ac.uk/
Lectures & Labs 2009 - WORK IN PROGRESS - DRAFT ONLY
Notes: Break the first day lab into two groups - a) folks that need DSP and/or Matlab tutoring b) folks that want to dive right in. Motivation: Each day, we expand on the basic system of "segmenting audio -> feature extraction -> classification", giving students additional tools and techniques to tackle increasingly difficult challenges.
- CCRMA Introduction (Carr/Sasha) -J/K
- Introduction to MIR (What is MIR? Why are people interested?) -J
- Overview of a basic MIR system architecture -J
- Timing and Segmentation: Frames, Onsets -K
- Overview of frequency-based onset detection -K
- Features: ZCR, Spectral moments -K
- Classification: Using simple heuristics and thresholds -J
Lab 1 Students who need a personal tutorial of Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.
- Lab 1 -"Playing with audio slices" - Jay to trim this lab down considerably.
- Onset detection
- Background for students needing a refresher:
- STFT and time-frequency representation of audio Lab1_4.m
- Temporal Analysis: - K
- Sub-Band Analysis
- Post-Processing, Peak-Picking
- Tempo estimation, beat tracking
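The onset-detection chain above (flux-style detection function, then post-processing and peak-picking) can be sketched in a few lines of Python, assuming the magnitude spectra of successive frames have already been computed (the lab itself works in Matlab):

```python
def spectral_flux(spectra):
    """Half-wave rectified frame-to-frame spectral difference."""
    flux = [0.0]
    for prev, cur in zip(spectra, spectra[1:]):
        flux.append(sum(max(c - p, 0.0) for p, c in zip(prev, cur)))
    return flux

def pick_peaks(flux, threshold):
    """Local maxima above a fixed threshold -> candidate onset frames."""
    return [i for i in range(1, len(flux) - 1)
            if flux[i] > threshold
            and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]]

# Toy spectra: energy appears at frame 2, so one onset is reported there.
spectra = [[0, 0, 0], [0, 0, 0], [5, 4, 3], [5, 4, 3], [5, 4, 3]]
print(pick_peaks(spectral_flux(spectra), threshold=1.0))  # [2]
```

Real systems replace the fixed threshold with an adaptive (e.g. median-based) one, which is exactly the kind of post-processing the lab explores.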
- Features: Additional spectral features (Spread, Flatness) -J
- Scaling of feature data -J
- Classification: Instance-based classifiers (such as k-NN and distance metrics) -J
- Tempo estimation, beat tracking
- Extract new features
- Build simple classifiers using those features - Jay to trim down this lab considerably
Day 3 - Harmony: Key, Chord Estimation
- Features: Timbral features; Octave-bands -J
- Features: Spectral Envelopes, MFCCs -J
- Classification: Unsupervised classification (k-means) -J
- Chroma Representation -K
- Key-Profile and Key Estimation -K
- Chord Recognition -K
- Structural Analysis 1 -K
- Similarity Matrix
- Novelty Score
- Music Segmentation
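The chroma representation behind key and chord estimation folds every spectral peak into one of 12 pitch classes, regardless of octave. A minimal Python sketch of that mapping (the lectures use Matlab), assuming A4 = 440 Hz:

```python
import math

def chroma_bin(freq_hz, ref_a4=440.0):
    """Map a frequency to one of 12 pitch classes (0 = C)."""
    midi = 69 + 12 * math.log2(freq_hz / ref_a4)  # MIDI 69 is A4
    return int(round(midi)) % 12

def chroma_vector(peaks):
    """Fold spectral peaks (freq, magnitude) into a 12-bin chroma vector."""
    bins = [0.0] * 12
    for freq, mag in peaks:
        bins[chroma_bin(freq)] += mag
    return bins

# C4 (261.63 Hz) and C5 (523.25 Hz) land in the same chroma bin,
# which is what makes chroma robust for key and chord estimation.
print(chroma_bin(261.63), chroma_bin(523.25))  # 0 0
```

Correlating such 12-bin vectors against key profiles is the basis of the key-estimation step, and frame-to-frame chroma similarity feeds the similarity matrix used for structural analysis.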
- New Classifier: GMM -J
- Classification examples: -J
- Genre Classification
- Instrument Identification
- Speech/Music Discrimination
- Building and evaluating systems - assembling testing and training sets - J
- IR Methodologies (Cross-validation, training and test sets) - K/J
- Classification: SVM -J
- IR Evaluation Metrics (precision, recall, f-measure, AROC,...) -K
- Practical tips & tricks -K/J
- Time permitting: Walkthrough of example MIR systems (a simple content-based music recommender, audio fingerprinting, etc.)
Jay's Lectures 2008
Lecture 6 - One-class SVM, nu parameter, accuracy, cross-validation, evaluation metrics, assembling training and testing data, probabilistic clustering with GMMs, GMM parameters, distance measures between PDFs, Expectation-Maximization, Artist and Genre classification.
Abstract: This lab will introduce you to the practice of analyzing and segmenting audio files, extracting features, and applying basic classifications. Our future labs will build upon this essential work - but will use more sophisticated training sets, features, and classifiers.
Abstract: My first audio classifier: introducing k-NN! We can now appreciate why we need additional intelligence in our systems - heuristics can't get very far in the world of complex audio signals. We'll be using Netlab's implementation of k-NN for our work here. It proves to be a straightforward and easy-to-use implementation. The steps and skills of working with one classifier will scale nicely to working with other, more complex classifiers. We're also going to be using the new features in our arsenal: cherishing those "spectral moments" (centroid, bandwidth, skewness, kurtosis) and also examining other spectral statistics.
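For intuition, here is the k-NN idea in a few lines of Python (the lab uses Netlab's Matlab implementation; the feature values and labels below are made up for illustration):

```python
import math
from collections import Counter

def euclidean(a, b):
    """Distance metric between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label). Majority vote of k nearest."""
    nearest = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two clusters of 2-D features (think scaled centroid and ZCR values).
train = [([0.1, 0.2], "speech"), ([0.2, 0.1], "speech"),
         ([0.9, 0.8], "music"), ([0.8, 0.9], "music")]
print(knn_classify(train, [0.85, 0.85]))  # music
```

Swapping in a different distance metric or a different k is a one-line change, which is why k-NN is such a friendly first classifier.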
Abstract: As we anecdotally observed in yesterday's lab, several parameters can affect the quality of our classifications.
- Audio files used in the training data sets.
- Features used in training / testing.
- Use of scaling for features.
- The size of the frames extracted from the audio.
We'll gain an intuitive feel for the effect of each of these parameters by trying some simple experiments. Afterwards, we'll dive into a lab on extracting tonal information from your audio streams.
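Of the parameters above, feature scaling is the easiest to see in code. A minimal Python sketch of z-score scaling (one common choice; the labs do the equivalent in Matlab, and the example feature values are made up):

```python
import math

def zscore_scale(rows):
    """Scale each feature column to zero mean and unit variance."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((v - m) ** 2 for v in c) / len(c)) or 1.0
            for c, m in zip(cols, means)]
    return [[(v - m) / s for v, m, s in zip(row, means, stds)]
            for row in rows]

# Unscaled, a centroid in the thousands of Hz would swamp a ZCR in
# [0, 1] inside any distance metric; after scaling both columns
# contribute comparably.
scaled = zscore_scale([[2000.0, 0.10], [3000.0, 0.20], [4000.0, 0.30]])
print(scaled)
```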
Abstract: Sometimes, an unsupervised learning technique is preferred. Perhaps you do not have access to adequate training data, the labels for your training data are not completely clear, or you just want to quickly sort real-world, unseen data into groups based on its feature similarity. Regardless of your situation, clustering is a great option! This lab also introduces MFCCs as a main measure of timbral similarity.
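The clustering workhorse here is k-means; a bare-bones 1-D Python sketch of Lloyd's algorithm (the lab uses Netlab's Matlab routines, and the toy values below stand in for, say, spectral-centroid features):

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm on 1-D points; returns final centroids."""
    centroids = points[:k]  # naive init: first k points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated groups of feature values -> two centroids.
print(kmeans([1.0, 1.2, 0.8, 10.0, 10.5, 9.5], k=2))  # [1.0, 10.0]
```

No labels are needed at any point - the groups emerge from feature similarity alone, which is exactly the unsupervised setting the abstract describes.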
Abstract: In this lab, you'll learn how to scale, format data, and find the optimum parameters for binary classification Support Vector Machines. We'll train/build models and test them on real-world data.
Abstract: By the end of this lab, you will understand how to use GMMs - a probabilistic clustering and "soft classification" technique.
Abstract: By the end of this lab, you should have a skeletal outline of where your project is going to go, what data you will use, what features, classifiers, and techniques you will use, and what your metric of success / goal is. Additionally, the lab includes a walkthrough of cross-validation techniques for measuring your accuracy and a list of helpful HMM functions.
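The mechanics of k-fold cross-validation are just index bookkeeping; a small Python sketch (the lab walks through the Matlab equivalent):

```python
def k_fold_splits(n_items, k):
    """Yield (train_indices, test_indices) for k-fold cross-validation."""
    indices = list(range(n_items))
    fold_size = n_items // k
    for fold in range(k):
        start = fold * fold_size
        # Last fold absorbs any remainder items.
        end = start + fold_size if fold < k - 1 else n_items
        test = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, test

# 10 labelled clips, 5 folds: every clip lands in exactly one test
# set, and each model is trained on the other 8 clips.
for train, test in k_fold_splits(10, 5):
    print(test)
```

Averaging the evaluation metric over the k held-out folds gives a far more trustworthy accuracy estimate than a single train/test split, especially with the small labelled datasets typical of course projects.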
References for additional info
- Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)
- Netlab by Ian T. Nabney (includes software)
- Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)
- Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)
- Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley 2000
Prerequisite / background material:
- The Mathworks' Matlab Tutorial
- ISMIR2007 MIR Toolbox Tutorial
- Check out the references listed at the end of the Klapuri & Davy book
- Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1
Other books (not necessarily reviewed by the instructors yet):
- Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop
- Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.
- Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.
- "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.
- Machine Learning, Tom Mitchell, McGraw Hill, 1997.
Audio Source Material
OLPC Sound Sample Archive (8.5 GB) 
RWC Music Database (n DVDs) [available in Stanford Music library]
THE WIKI VALUE-ADD - Supplemental information for the lectures...
MATLAB Utility Scripts