MIR workshop 2015

''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''
== News ==

'''Wednesday, July 15'''

8:48 am:

* Today: Zafar Rafii, Jeff Scott, Aneesh Vartakavi, et al. of Gracenote will join us for lunch and for guest lectures in the afternoon.
* If you checked out https://github.com/stevetjoa/stanford-mir onto your local machine, be sure to '''git checkout gh-pages''' before working.

'''Tuesday, July 14'''

9:31 am:

* Don't forget '''%matplotlib inline''' at the top of your notebooks.

'''Monday, July 13'''

2:18 pm: Dependencies:

* System packages (via apt-get install): git, python-dev, python-pip, python-scipy, python-matplotlib
* Python packages (via pip install): boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, mir_eval, seaborn, requests
* Alternatively, the Anaconda distribution bundles most of the Python packages above.
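On a Debian-style system, the dependency list above corresponds roughly to the following commands (a sketch only; exact package names vary by distribution):

```shell
# System packages (Debian/Ubuntu-style; names may differ elsewhere)
sudo apt-get install git python-dev python-pip python-scipy python-matplotlib

# Python packages used in the labs
pip install boto boto3 matplotlib ipython numpy scipy scikit-learn librosa mir_eval seaborn requests
```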
11:11 am: Your post-it notes:

* content-based analysis, e.g. classifying violin playing style (vibrato, bowing)
* MIR overview; music recommendation
* feature extraction; dimensionality reduction; prediction
* source separation techniques
* chord estimation; "split" musical instruments; find beats in a song
* audio-to-MIDI; signal/source/speaker separation; programming audio in Python (in general)
* acoustic fingerprinting
* machine learning; turning analysis into synthesis; music characterization
* beat tracking; ways of identifying timbre
* mood recognition
* instrument separation; real-time processing
* Marsyas?
* speed of retrieval
* what's possible and what's not in music information retrieval; how to use an MIR toolbox for fast realization of ideas
* machine learning techniques for more general audio problems, e.g. language detection or identifying sound sources
* networking and getting to know you all

== Attendees ==

Eric Raymond <lowlifi@gmail.com>, Stelios Andrew Stavroulakis, Richard Mendelsohn, Naithan Bosse, Alessio Bazzica, Karthik Yadati, Martha Larson, Stephen Hartzog, Philip Lee, Jaeyoung Choi, Matthew Gallagher, Yule Wu, Mark Renker, Rohit Ainapure, Eric Tarr <erictarr@gmail.com>, Allen Wu, Aaron Hipple
  
 
== Logistics ==

== Abstract ==

How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing?

This workshop teaches the underlying ideas, approaches, and technologies behind the practical design of intelligent audio systems built on music information retrieval (MIR) algorithms.

MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms can recognize and extract this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.

This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of music information retrieval. We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction; generation of higher-level features such as chord estimates; audio similarity clustering, search, and retrieval techniques; and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.

Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.

Workshop structure: The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include the creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.

== Schedule ==

Instructional material can be found at musicinformationretrieval.com (read only) or on GitHub (full source).

=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===

'''Lecture'''

Introductions

* CCRMA Introduction (Nette, Fernando)
* Introduction to MIR (What is MIR? Why MIR? Commercial applications)
* Basic MIR system architecture
* Timing and Segmentation: Frames, Onsets
* Classification: Instance-based classifiers (k-NN)

Overview: Signal Analysis and Feature Extraction for MIR Applications

MFCCs sonified:

* Original track ("Chewing Gum") [1]
* MFCCs only [2]

'''Lab'''

Understanding Audio Features Through Sonification

* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.
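As a warm-up for the frame-based feature extraction covered above, here is a NumPy sketch that slices a signal into frames and computes one per-frame feature (zero-crossing rate, a toy stand-in for the richer features used in the lab):

```python
import numpy as np

def frame_signal(x, frame_size=2048, hop=1024):
    """Slice a 1-D signal into overlapping frames (a basic MIR front end)."""
    n_frames = 1 + (len(x) - frame_size) // hop
    return np.stack([x[i * hop : i * hop + frame_size] for i in range(n_frames)])

def zero_crossing_rate(frames):
    """Fraction of sign changes per frame; a crude noisiness/timbre feature."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

# Toy signal: a 440 Hz sine at a 22050 Hz sample rate
sr = 22050
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
frames = frame_signal(x)
zcr = zero_crossing_rate(frames)
```

For a pure 440 Hz tone the zero-crossing rate sits near 2 * 440 / 22050 per sample, independent of the frame.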
  
=== Day 2: Pitch and Chroma Analysis; Machine Learning, Clustering and Classification ===

'''Lecture'''

Classification: Unsupervised vs. Supervised, k-means, GMM, SVM

Pitch and Chroma

* Features:
** Monophonic Pitch Detection
** Polyphonic Pitch Detection
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma)
* Analysis:
** Dynamic Time Warping
** Hidden Markov Models
** Harmonic Analysis / Chord and Key Detection
* Applications:
** Audio-Score Alignment
** Cover Song Detection
** Query-by-humming
** Music Transcription
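The chroma idea above — folding spectral energy onto the 12 pitch classes — can be sketched with a plain FFT. This is an illustrative toy, not the lecture's actual implementation; the pitch-class convention (index 0 = A, referenced to 440 Hz) is an assumption:

```python
import numpy as np

def chroma_from_signal(x, sr, fmin=55.0, fmax=2000.0):
    """Toy chroma: assign each FFT bin's magnitude to a pitch class (0 = A)."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    chroma = np.zeros(12)
    for f, mag in zip(freqs, spectrum):
        if fmin <= f <= fmax:
            # Semitone distance from A440, folded onto one octave
            pc = int(round(12 * np.log2(f / 440.0))) % 12
            chroma[pc] += mag
    return chroma / chroma.max()

sr = 22050
t = np.arange(2 * sr) / sr
x = np.sin(2 * np.pi * 440 * t)        # A4
chroma = chroma_from_signal(x, sr)
```

For a pure A4 tone, essentially all the energy lands in the A bin.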
'''Lab'''

[http://musicinformationretrieval.com/knn_instrument_classification.html K-NN Instrument Classification]

[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]
 
Bonus Slides: Temporal & Harmony Analysis

* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]
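The unsupervised counterpart, as in the MFCC/K-Means lab above, can be sketched the same way; here random 13-dimensional vectors stand in for per-frame MFCCs from two different instruments:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-ins for per-frame MFCC vectors from two different sources
a = rng.normal(loc=0.0, scale=0.2, size=(100, 13))
b = rng.normal(loc=2.0, scale=0.2, size=(100, 13))
X = np.vstack([a, b])

# No labels: K-Means discovers the two groups on its own
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

Frames from the same source should land in the same cluster, even though no labels were given.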
  
=== Day 3: Deep Belief Networks; Pitch Transcription ===

Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]

[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]

Pitch Transcription Exercise

Guest lectures by Gracenote

Catch-up from yesterday
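One common starting point for a pitch-transcription exercise is autocorrelation-based monophonic pitch detection. A sketch on a synthesized tone (the exercise itself presumably works on recorded audio):

```python
import numpy as np

def estimate_pitch(x, sr, fmin=80.0, fmax=1000.0):
    """Monophonic pitch estimate from the peak of the autocorrelation."""
    corr = np.correlate(x, x, mode="full")[len(x) - 1 :]
    lo = int(sr / fmax)            # smallest lag to consider
    hi = int(sr / fmin)            # largest lag to consider
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag

sr = 22050
t = np.arange(sr // 4) / sr
x = np.sin(2 * np.pi * 220 * t)    # A3
f0 = estimate_pitch(x, sr)
```

The autocorrelation peaks at a lag of one period, so `f0` comes out near 220 Hz (quantized to an integer lag; interpolating around the peak refines it).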
 
  
 
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===

'''Lecture'''

Music Transcription and Source Separation

* Nonnegative Matrix Factorization
* Sparse Coding

Evaluation Metrics for Information Retrieval

'''Lab'''

[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]
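The NMF idea above — factor a magnitude spectrogram into spectral templates times time-varying activations — in sketch form, on a toy nonnegative matrix rather than a real spectrogram:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy "spectrogram": two spectral templates with time-varying gains
templates = np.array([[1.0, 0.0, 0.5, 0.0],
                      [0.0, 1.0, 0.0, 0.5]]).T     # 4 freq bins x 2 sources
activations = rng.random((2, 50))                  # 2 sources x 50 frames
V = templates @ activations                        # 4 x 50, nonnegative

model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)    # learned templates   (4 x 2)
H = model.components_         # learned activations (2 x 50)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because `V` is exactly rank 2 and nonnegative, NMF recovers it almost perfectly; in separation work, each column of `W` ideally captures one source's spectrum and the rows of `H` its note activations.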
  
 
References:

* [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]
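Precision, recall, and F-measure — the basic quantities behind the evaluation metrics and ROC analysis above — in sketch form:

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Binary retrieval metrics from ground-truth and predicted labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)        # relevant items retrieved
    fp = np.sum(~y_true & y_pred)       # irrelevant items retrieved
    fn = np.sum(y_true & ~y_pred)       # relevant items missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# 4 relevant items; the system retrieves 5, of which 3 are relevant
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 1, 0, 0]
p, r, f = precision_recall_f1(y_true, y_pred)   # p = 3/5, r = 3/4
```

Sweeping a detection threshold and plotting the resulting true-positive rate against the false-positive rate gives the ROC curve discussed in the reference.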
  
=== Day 5: Hashing for Music Search and Retrieval ===

Locality Sensitive Hashing ([http://musicinformationretrieval.com/lsh_fingerprinting.html notebook])
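Locality sensitive hashing in a nutshell: random-projection hashes map similar vectors to the same bucket with high probability, so near-duplicate fingerprints can be found without comparing against every entry. A minimal sketch (not the notebook's actual fingerprinting code):

```python
import numpy as np

rng = np.random.default_rng(42)

def lsh_hash(v, planes):
    """Sign of projections onto random hyperplanes -> bit-string bucket key."""
    bits = (planes @ v) > 0
    return "".join("1" if b else "0" for b in bits)

dim, n_bits = 16, 8
planes = rng.normal(size=(n_bits, dim))

v = rng.normal(size=dim)
near = v + 0.001 * rng.normal(size=dim)   # tiny perturbation of v

h_v = lsh_hash(v, planes)
h_near = lsh_hash(near, planes)           # usually the same bucket as v
```

In a retrieval system, several such hash tables are used together so that a true near neighbor collides with the query in at least one of them.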
  
 
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]
  
== Software Libraries ==

* [https://www.python.org/ Python]
* [http://www.numpy.org/ NumPy]
* [http://www.scipy.org/ SciPy]
* [http://ipython.org/ IPython]
* [http://scikit-learn.org/stable/ scikit-learn]
* [http://bmcfee.github.io/librosa/ librosa]
* [http://craffel.github.io/mir_eval/ mir_eval]
* [http://essentia.upf.edu/ Essentia]
* [http://www.vamp-plugins.org/vampy.html VamPy]
  
 
== Supplemental papers and information for the lectures ==

== Past CCRMA MIR Workshops and lectures ==

* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]
  
== Additional References ==

Recommended books:

* Data Mining: Practical Machine Learning Tools and Techniques, 2nd edition, Ian H. Witten and Eibe Frank (includes software)
* Netlab, Ian T. Nabney (includes software)
* Signal Processing Methods for Music Transcription, Anssi Klapuri and Manuel Davy (editors)
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (editors)
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000

Background material:

* http://140.114.76.148/jang/books/audioSignalProcessing/
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR 2007 MIR Toolbox Tutorial]

Other books:

* Pattern Recognition and Machine Learning (Information Science and Statistics), Christopher M. Bishop
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995
* Pattern Classification, 2nd edition, Richard Duda, Peter Hart, and David Stork, Wiley-Interscience, 2001
* Artificial Intelligence: A Modern Approach, 2nd edition, Stuart Russell and Peter Norvig, Prentice Hall, 2003
* Machine Learning, Tom Mitchell, McGraw-Hill, 1997

Interesting links:

* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials
* http://www.music-ir.org/evaluation/tools.html
* http://htk.eng.cam.ac.uk/
  
== Audio Source Material ==

* OLPC Sound Sample Archive (8.5 GB) [3]
* http://www.tsi.telecom-paristech.fr/aao/en/category/database/
* RWC Music Database (n DVDs) [available in the Stanford Music Library]
* RWC Musical Instrument Sound Database table of contents: http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html
* University of Iowa Musical Instrument Samples
* https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music
 
 

Latest revision as of 14:33, 17 July 2015


