MIR workshop 2014

From CCRMA Wiki
Revision as of 08:58, 26 June 2014 by Leighs


Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval


Logistics

Workshop Title: Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval

Abstract

How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing? This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.

MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.

This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by fusing basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make these highly interdisciplinary technologies and complex algorithms approachable.

Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.

Workshop structure: The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic "intelligent audio systems" from the ground up, leveraging existing MIR toolboxes, programming environments, and applications. Labs will include the creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.

Schedule: Lectures & Labs

Day 1: Introduction to MIR, Signal Analysis and Feature Extraction

Presenters: Jay LeBoeuf, Leigh Smith

Glossary of Terms to be used in this course <work in progress>


Day 1: Part 1 Lecture 1 Slides

  • Introductions
  • CCRMA Introduction - (Nette, Carr, Fernando).
  • Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR)
  • Overview of a basic MIR system architecture
  • Timing and Segmentation: Frames, Onsets
  • Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")
  • Classification: Instance-based classifiers (k-NN)
  • Information Retrieval Basics (Part 1)
    • Classifier evaluation (Cross-validation, training and test sets)
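
The classification and evaluation topics above can be sketched in a few lines. This is an illustrative Python/scikit-learn example (the labs themselves use Matlab): a k-NN classifier trained on synthetic two-class feature vectors and scored with 5-fold cross-validation. The feature values and class layout are made up for demonstration, not taken from the course material.

```python
# Minimal sketch of the Day 1 pipeline: an instance-based (k-NN)
# classifier evaluated with cross-validation. The 2-D feature vectors
# below are synthetic stand-ins for real audio features (e.g. ZCR, RMS).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Two synthetic "instrument" classes, 50 examples each.
class_a = rng.normal(loc=[0.1, 0.6], scale=0.05, size=(50, 2))
class_b = rng.normal(loc=[0.4, 0.2], scale=0.05, size=(50, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

# Instance-based classification: label a query by its 3 nearest neighbors.
knn = KNeighborsClassifier(n_neighbors=3)

# 5-fold cross-validation: train on 4/5 of the data, test on the held-out 1/5,
# rotating the held-out fold so every example is tested exactly once.
scores = cross_val_score(knn, X, y, cv=5)
print("fold accuracies:", scores)
print("mean accuracy:", scores.mean())
```

Because the two synthetic clusters are well separated, the cross-validated accuracy is near perfect; with real audio features the folds reveal how well the classifier generalizes to unseen examples.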


Day 1: Part 2 Lecture 2 Slides

  • Overview: Signal Analysis and Feature Extraction for MIR Applications
  • Windowed Feature Extraction
    • I/O and analysis loops
  • Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)
    • Kinds/Domains of Features
    • Application Requirements (labeling, segmentation, etc.)
  • Time-domain features (MPEG-7 Audio book ref)
    • RMS, Peak, LP/HP RMS, Dynamic range, ZCR
  • Frequency-domain features
    • Spectrum, Spectral bins
    • Spectral measures (Spectral statistical moments)
    • Pitch-estimation and tracking
    • MFCCs
  • Spatial-domain features
    • M/S Encoding, Surround-sound Processing, Frequency-dependent spatial separation, LCR sources
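
The windowed feature-extraction loop and the time-domain features above can be sketched as follows, in Python/NumPy rather than the Matlab used in the labs. The frame and hop sizes are common illustrative defaults, not values prescribed by the lecture.

```python
# Windowed analysis loop: slice a signal into overlapping frames and
# compute two time-domain features per frame (RMS and zero-crossing rate).
import numpy as np

def frame_features(x, frame_size=1024, hop_size=512):
    """Return an (n_frames, 2) array of [RMS, ZCR] feature vectors."""
    feats = []
    for start in range(0, len(x) - frame_size + 1, hop_size):
        frame = x[start:start + frame_size]
        rms = np.sqrt(np.mean(frame ** 2))           # average energy
        # ZCR: fraction of adjacent-sample pairs that change sign.
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
        feats.append([rms, zcr])
    return np.array(feats)

# Test signal: a 440 Hz sine at 44.1 kHz, one second long.
sr = 44100
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 440 * t)

features = frame_features(x)
print(features.shape)   # one row of [RMS, ZCR] per frame
```

Stacking one such feature vector per frame is exactly the feature-vector design step: each frame becomes a point in feature space that the classifiers from Lecture 1 can operate on.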

MFCCs Sonified
Original track ("Chewing Gum"): [1]
MFCCs only [2]
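
MFCCs follow a standard recipe: power spectrum of a windowed frame, a triangular mel filterbank, log compression, then a DCT keeping the first few coefficients. Below is a compact single-frame sketch in Python/NumPy; the filterbank size and coefficient count are common defaults, not necessarily the ones used in the lecture or the sonification above.

```python
# Single-frame MFCC sketch: FFT -> mel filterbank -> log -> DCT.
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595 * np.log10(1 + f / 700)

def mel_to_hz(m):
    return 700 * (10 ** (m / 2595) - 1)

def mfcc_frame(frame, sr, n_mels=26, n_mfcc=13):
    """MFCCs for one windowed frame of audio."""
    # 1. Power spectrum of the Hann-windowed frame.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)

    # 2. Triangular mel filterbank between 0 Hz and Nyquist:
    #    filter centers are equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    hz_pts = mel_to_hz(mel_pts)
    fbank = np.zeros((n_mels, len(freqs)))
    for i in range(n_mels):
        lo, ctr, hi = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        rising = (freqs - lo) / (ctr - lo)
        falling = (hi - freqs) / (hi - ctr)
        fbank[i] = np.clip(np.minimum(rising, falling), 0, None)

    # 3. Log mel energies, then DCT to decorrelate; keep first n_mfcc.
    mel_energy = np.log(fbank @ spectrum + 1e-10)
    return dct(mel_energy, norm='ortho')[:n_mfcc]

sr = 22050
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 440 * t)
coeffs = mfcc_frame(frame, sr)
print(coeffs.shape)   # (13,)
```

Discarding the higher DCT coefficients keeps the smooth spectral envelope and throws away fine pitch detail, which is why resynthesizing from MFCCs alone (as in the sonification linked above) sounds blurred.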



Lab 1:

  • Application: Instrument recognition