MIR workshop 2015

From CCRMA Wiki
Revision as of 09:34, 17 July 2015 by Kiemyang


Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval

News

Wednesday, July 15

8:48 am:

  • Today: Zafar Rafii, Jeff Scott, Aneesh Vartakavi, et al. of Gracenote will join us for lunch and for guest lectures in the afternoon.
  • If you checked out https://github.com/stevetjoa/stanford-mir onto your local machine, be sure to git checkout gh-pages before working.

Tuesday, July 14

9:31 am:

  • Don't forget %matplotlib inline at the top of your notebooks.

Monday, July 13

2:18 pm: dependencies:

  • apt-get install: git, python-dev, pip, python-scipy, python-matplotlib
  • Python packages: pip, boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, mir_eval, seaborn, requests
  • (Anaconda)
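The dependency list above can be sketched as shell commands. This is an assumption about the intended setup, not a verbatim transcript: exact package names vary by distro (`pip` is typically packaged as `python-pip` on Debian/Ubuntu), and Anaconda users can skip the apt step.

```shell
# System packages (Debian/Ubuntu naming assumed)
sudo apt-get install git python-dev python-pip python-scipy python-matplotlib

# Python packages listed above
pip install boto boto3 matplotlib ipython numpy scipy scikit-learn \
    librosa mir_eval seaborn requests
```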

11:11 am: Your post-it notes:

  • content-based analysis e.g. classifying violin playing style (vibrato, bowing)
  • MIR overview; music recommendation
  • feature extraction; dimensionality reduction; prediction
  • source separation techniques
  • chord estimation; "split" musical instruments; find beats in a song
  • audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)
  • acoustic fingerprinting
  • machine learning; turn analysis -> synth; music characterization
  • beat tracking; ways of identifying timbre
  • mood recognition
  • instrument separation; real-time processing
  • Marsyas?
  • speed of retrieval
  • what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas
  • machine learning techniques for more general audio problems i.e. language detection or identifying sound sources
  • networking and getting to know you all

Logistics

Abstract

How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?

This workshop covers the underlying ideas, approaches, and technologies behind these questions, along with the practical design of intelligent audio systems using music information retrieval (MIR) algorithms.

MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.

This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of music information retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction; generation of higher-level features such as chord estimates; audio similarity clustering, search, and retrieval techniques; and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.

Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.

Workshop Structure: The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.

Schedule

Instructional material can be found at musicinformationretrieval.com (read only) or on GitHub (full source).

Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction

Lecture

Introductions

  • CCRMA Introduction (Nette, Fernando)
  • Introduction to MIR (What is MIR? Why MIR? Commercial applications)
  • Basic MIR system architecture
  • Timing and Segmentation: Frames, Onsets
  • Classification: Instance-based classifiers (k-NN)
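The instance-based classification (k-NN) topic above can be sketched with scikit-learn, one of the packages in Monday's dependency list. The features and labels below are invented for illustration: column 0 stands in for a zero-crossing rate and column 1 for a spectral centroid in Hz.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy training set: two made-up feature dimensions per example
# (values are illustrative, not measured from real audio).
X = np.array([[0.10,  500.0],
              [0.12,  520.0],
              [0.40, 3000.0],
              [0.45, 3100.0]])
y = ['kick', 'kick', 'snare', 'snare']

# k-NN stores the training instances and classifies a query
# by majority vote among its k nearest neighbors.
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)

# A query near the 'kick' cluster: 2 of its 3 nearest neighbors are 'kick'.
print(clf.predict([[0.11, 510.0]])[0])  # -> kick
```

In a real MIR system the rows of `X` would be feature vectors extracted from audio frames or tracks rather than hand-typed numbers.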

Overview: Signal Analysis and Feature Extraction for MIR Applications

MFCCs sonified

  • Original track ("Chewing Gum")