MIR workshop 2014

Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval


Logistics

Workshop Title: Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval

  • Monday, June 23, through Friday, June 27, 2014. 9:30 AM to 5 PM every day.
  • Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx
  • Instructors:
    • [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.]
    • [http://stevetjoa.com Steve Tjoa]
    • [http://www.leighsmith.com/Research Leigh Smith]

Abstract

How would you "Google for audio", provide music recommendations based your MP3 files, or have a computer "listen" and understand what you are playing? This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.

MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.

This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.

Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.

Workshop structure: The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.

Schedule: Lectures & Labs

Day 1: Introduction to MIR, Signal Analysis and Feature Extraction

Presenters: Jay LeBoeuf, Leigh Smith

Glossary of Terms to be used in this course <work in progress>


Day 1: Part 1 [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture1.pdf Lecture 1 Slides]

  • Introductions
  • CCRMA Introduction - (Nette, Carr, Fernando).
  • Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR)
  • Overview of a basic MIR system architecture
  • Timing and Segmentation: Frames, Onsets
  • Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")
  • Classification: Instance-based classifiers (k-NN)
  • Information Retrieval Basics (Part 1)
    • Classifier evaluation (Cross-validation, training and test sets)
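A minimal sketch of the k-NN classification and cross-validation workflow listed above, using Python and scikit-learn (neither is prescribed by the slides; the class names and feature values below are invented stand-ins for the real per-note audio features used in the lab).

<pre>
# k-NN classification with 5-fold cross-validation on synthetic 2-D features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Pretend features: [zero-crossing rate, spectral centroid in Hz] for two classes.
kicks  = rng.normal(loc=[0.05, 500.0],  scale=[0.02, 100.0], size=(100, 2))
snares = rng.normal(loc=[0.20, 2500.0], scale=[0.05, 400.0], size=(100, 2))
X = np.vstack([kicks, snares])
y = np.array([0] * 100 + [1] * 100)        # 0 = "kick", 1 = "snare"

clf = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print("fold accuracies:", scores, "mean:", scores.mean())
</pre>

Because k-NN is distance-based, features on very different scales should normally be standardized (for example with sklearn.preprocessing.StandardScaler) before fitting.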


Day 1: Part 2 [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture2.pdf Lecture 2 Slides]

  • Overview: Signal Analysis and Feature Extraction for MIR Applications
  • Windowed Feature Extraction
    • I/O and analysis loops
  • Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)
    • Kinds/Domains of Features
    • Application Requirements (labeling, segmentation, etc.)
  • Time-domain features (MPEG-7 Audio book ref)
    • RMS, Peak, LP/HP RMS, Dynamic range, ZCR
  • Frequency-domain features
    • Spectrum, Spectral bins
    • Spectral measures (Spectral statistical moments)
    • Pitch-estimation and tracking
    • MFCCs
  • Spatial-domain features
    • M/S Encoding, Surround-sound Processing, Frequency-dependent spatial separation, LCR sources
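To make the windowed feature extraction above concrete, here is a self-contained Python/NumPy sketch (NumPy is an assumption, not something the lecture mandates) that slides a window over a toy signal hop by hop and computes a few of the listed features: RMS, zero-crossing rate, spectral centroid, and spectral spread. A real lab script would load audio from disk instead of synthesizing it.

<pre>
# Frame-wise time- and frequency-domain features on a synthetic test signal.
import numpy as np

sr = 22050
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(sr)  # 1 s toy signal

frame_len, hop = 1024, 512
window = np.hanning(frame_len)
freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)

features = []
for start in range(0, len(x) - frame_len, hop):
    frame = x[start:start + frame_len]
    rms = np.sqrt(np.mean(frame ** 2))                      # RMS energy
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0    # zero-crossing rate
    mag = np.abs(np.fft.rfft(frame * window))               # magnitude spectrum
    centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)  # spectral centroid
    spread = np.sqrt(np.sum((freqs - centroid) ** 2 * mag) / (np.sum(mag) + 1e-12))
    features.append([rms, zcr, centroid, spread])

features = np.array(features)   # one row of features per frame
print(features.shape)
</pre>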

MFCCs Sonified
Original track ("Chewing Gum"): https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694
MFCCs only: http://www.cs.princeton.edu/~mdhoffma/icmc2008/
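For the MFCCs themselves, a common shortcut is the librosa library (an assumption of this sketch, not a workshop requirement); "audio.wav" below is a placeholder for any local audio file.

<pre>
# Computing a 13-coefficient MFCC sequence with librosa.
import librosa

y, sr = librosa.load("audio.wav")                  # placeholder filename; resampled to 22.05 kHz by default
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)                                  # (13, number_of_frames)
</pre>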



Lab 1:

  • Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")
  • [http://nbviewer.ipython.org/github/stevetjoa/stanford-mir/blob/master/Table_of_Contents.ipynb Lab 1 - Basic Feature Extraction and Classification]
  • From your home directory, simply type the following to obtain a copy of the repository: git clone https://github.com/stevetjoa/ccrma.git
    • To get an up-to-date version of the repository later, run git pull from within your repository folder.
  • Background for students needing a refresher:
    • [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]
  • REMINDER: Save all your work, because you may want to build on it in subsequent labs.

Day 2: Beat, Rhythm, Pitch and Chroma Analysis

Presenters: Leigh Smith, Steve Tjoa


Day 2: Part 1 Beat-finding and Rhythm Analysis [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture3.pdf Lecture 3 Slides]
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]

Demo: MediaMined Discover (Rhythmic Similarity)

  • Onset-detection: Many Techniques
    • Time-domain differences
    • Spectral-domain differences
    • Perceptual data-warping
    • Adaptive onset detection
  • Beat-finding and Tempo Derivation
    • IOIs and Beat Regularity, Rubato
      • Tatum, Tactus and Meter levels
      • Tempo estimation
    • Onset-detection vs Beat-detection
      • The Onset Detection Function
    • Approaches to beat tracking & Meter estimation
      • Autocorrelation
      • Beat Spectrum measures
      • Multi-resolution (Wavelet)
    • Beat Histograms
    • Fluctuation Patterns
    • Joint estimation of downbeat and chord change
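A simplified, self-contained Python sketch of one approach from the list above: build a spectral-flux onset envelope and estimate tempo from its autocorrelation. The input is a synthetic 120 BPM click track, and the frame/hop sizes are arbitrary choices, so treat the result as illustrative only.

<pre>
# Tempo estimation: spectral-flux onset envelope + autocorrelation peak picking.
import numpy as np

sr, hop, nfft = 22050, 512, 1024
dur, bpm = 10.0, 120.0
x = np.zeros(int(sr * dur))
x[::int(sr * 60.0 / bpm)] = 1.0                      # clicks every 0.5 s
x = np.convolve(x, np.hanning(64), mode="same")      # soften the clicks

frames = [x[i:i + nfft] * np.hanning(nfft)
          for i in range(0, len(x) - nfft, hop)]
mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
flux = np.maximum(np.diff(mags, axis=0), 0.0).sum(axis=1)   # onset strength per frame

ac = np.correlate(flux, flux, mode="full")[len(flux) - 1:]  # autocorrelation, lags >= 0
lags = np.arange(1, len(ac))
bpms = 60.0 * sr / (hop * lags)
valid = (bpms > 60) & (bpms < 200)                          # plausible tempo range
best = lags[valid][np.argmax(ac[1:][valid])]
print("estimated tempo: %.1f BPM" % (60.0 * sr / (hop * best)))
</pre>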


Day 2, Part 2: Pitch and Chroma Analysis Lecture 4 Slides

  • Features:
    • Monophonic Pitch Detection
    • Polyphonic Pitch Detection
    • Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma)
  • Analysis:
    • Dynamic Time Warping
    • Hidden Markov Models
    • Harmonic Analysis/Chord and Key Detection
  • Applications
    • Audio-Score Alignment
    • Cover Song Detection
    • Query-by-humming
    • Music Transcription
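Of the analysis tools above, Dynamic Time Warping is compact enough to write out directly. The sketch below implements the basic DP recursion on toy 1-D sequences; a real audio-to-score or cover-song system would compare per-frame chroma vectors with a vector distance instead of scalars.

<pre>
# Dynamic Time Warping cost between two 1-D sequences (basic DP recursion).
import numpy as np

def dtw_cost(a, b):
    """Cumulative DTW cost between 1-D sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            D[i, j] = d + min(D[i - 1, j],        # vertical step
                              D[i, j - 1],        # horizontal step
                              D[i - 1, j - 1])    # diagonal step
    return D[n, m]

x = np.sin(np.linspace(0, 3 * np.pi, 60))
y = np.sin(np.linspace(0, 3 * np.pi, 90))         # same melodic "shape", different tempo
print("cost(x, y):", dtw_cost(x, y))              # small despite the length mismatch
print("cost(x, x):", dtw_cost(x, x))              # zero for identical sequences
</pre>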

CCRMA Tour

Lab 2: Part 1: Tempo Extraction. Part 2: Add MFCCs to the classification system and test with cross-validation.

Day 3: Machine Learning, Clustering and Classification

Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video]

Guest Lecture: Stephen Pope (SndsLike, BirdGenie)
[https://ccrma.stanford.edu/workshops/mir2014/MAT_MIR4-update.pdf MAT_MIR4-update slides]
[https://ccrma.stanford.edu/workshops/mir2014/BirdsEar.pdf BirdGenie Slides]
[https://ccrma.stanford.edu/workshops/mir2014/SndsLike.pdf SndsLike Slides]

Lecture 5: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_ML.pdf Lecture 5 Slides]


Lab 3 Topic: MFCC + k-Means, Clustering
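A minimal Python/scikit-learn sketch of the Lab 3 idea: standardize per-frame "MFCC" vectors and cluster them with k-means. The feature matrix below is random stand-in data shaped like a 13-dimensional MFCC sequence; the lab would use the features extracted on Days 1-2.

<pre>
# k-means clustering of (fake) per-frame MFCC vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
mfcc_frames = np.vstack([rng.normal(0.0, 1.0, size=(200, 13)),     # "timbre A" frames
                         rng.normal(4.0, 1.0, size=(200, 13))])    # "timbre B" frames

X = StandardScaler().fit_transform(mfcc_frames)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(np.bincount(km.labels_))      # how many frames landed in each cluster
</pre>

Swapping KMeans for sklearn.mixture.GaussianMixture, or for sklearn.svm.SVC once labels are available, gives the GMM and SVM variants named in Lecture 5.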

Matlab code for key estimation, chord recognition:

Day 4: Music Information Retrieval in Polyphonic Mixtures

Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]

  • Music Transcription and Source Separation
  • Nonnegative Matrix Factorization
  • Sparse Coding
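A small illustration of the NMF idea from this session: factor a nonnegative "spectrogram" V into spectral templates W and activations H (V ≈ WH). The toy V below is built from two known templates so the factorization has something to recover; a real experiment would use an STFT magnitude matrix, and scikit-learn is only one of several possible NMF implementations.

<pre>
# Nonnegative matrix factorization of a toy magnitude "spectrogram".
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
templates = rng.random((513, 2))              # two made-up spectral templates
activations = np.zeros((2, 200))
activations[0, :100] = rng.random(100)        # source 1 active in the first half
activations[1, 100:] = rng.random(100)        # source 2 active in the second half
V = templates @ activations + 1e-3            # nonnegative mixture

model = NMF(n_components=2, init="random", max_iter=500, random_state=0)
W = model.fit_transform(V)                    # learned templates, shape (513, 2)
H = model.components_                         # learned activations, shape (2, 200)
print("reconstruction error:", np.linalg.norm(V - W @ H))
</pre>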

Guest Lecture 7: Andreas Ehmann, MIREX

Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]
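The core metrics in this lecture reduce to a few counts; here is a tiny worked example in Python with made-up ground truth and system output (purely illustrative numbers).

<pre>
# Precision, recall, and F-measure from true/false positives and negatives.
import numpy as np

truth     = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])   # which items are relevant
retrieved = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 0])   # which items the system returned

tp = np.sum((retrieved == 1) & (truth == 1))
fp = np.sum((retrieved == 1) & (truth == 0))
fn = np.sum((retrieved == 0) & (truth == 1))

precision = tp / (tp + fp)                  # fraction of returned items that are relevant
recall    = tp / (tp + fn)                  # fraction of relevant items that were returned
f_measure = 2 * precision * recall / (precision + recall)
print(precision, recall, f_measure)         # 0.75, 0.6, ~0.667 for this toy data
</pre>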


Lab 4

  • [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]

References:

  • [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]

Day 5: Deep Belief Networks and Wavelets

Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]
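Restricted Boltzmann Machines are the usual building block of the deep belief networks covered here; scikit-learn ships a basic one. A hedged sketch on random binary stand-in data (the data and layer size are invented; this is a single layer, not a full DBN):

<pre>
# Training one RBM layer on toy binary feature vectors.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random((500, 64)) > 0.5).astype(float)   # stand-in binary features in {0, 1}

rbm = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)
hidden = rbm.fit_transform(X)                     # hidden-unit activation probabilities
print(hidden.shape)                               # (500, 16)
</pre>

Stacking several such layers and fine-tuning them is what turns RBMs into a deep belief network.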

Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]
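As a pocket-sized illustration of the wavelet idea, one level of the Haar transform can be written by hand: split the signal into local averages (the approximation band) and local differences (the detail band), which together reconstruct the original exactly.

<pre>
# One level of a Haar wavelet decomposition and its inverse, in plain NumPy.
import numpy as np

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
approx = (x[0::2] + x[1::2]) / np.sqrt(2)     # low-pass: scaled pairwise sums
detail = (x[0::2] - x[1::2]) / np.sqrt(2)     # high-pass: scaled pairwise differences
print("approximation:", approx)
print("detail:       ", detail)

# Perfect reconstruction from the two bands.
x_rec = np.empty_like(x)
x_rec[0::2] = (approx + detail) / np.sqrt(2)
x_rec[1::2] = (approx - detail) / np.sqrt(2)
print("reconstructed:", x_rec)
</pre>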

[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]

Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]

Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9

Afternoon: CCRMA Lawn BBQ

software, libraries, examples

Applications & Environments

Machine Learning Libraries & Toolboxes

Optional Toolboxes

Supplemental papers and information for the lectures...

Past CCRMA MIR Workshops and lectures

References for additional info

Recommended books:

  • Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, Ian H. Witten and Eibe Frank (includes software)
  • Netlab, Ian T. Nabney (includes software)
  • Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)
  • Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (Editors)
  • Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000

Prerequisite / background material:

Papers:

Other books:

  • Pattern Recognition and Machine Learning (Information Science and Statistics), Christopher M. Bishop
  • Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.
  • Pattern Classification, 2nd edition, R. Duda, P. Hart, and D. Stork, Wiley Interscience, 2001.
  • Artificial Intelligence: A Modern Approach, Second Edition, S. Russell and P. Norvig, Prentice Hall, 2003.
  • Machine Learning, Tom Mitchell, McGraw Hill, 1997.

Interesting Links:

Audio Source Material

OLPC Sound Sample Archive (8.5 GB) [3]

http://www.tsi.telecom-paristech.fr/aao/en/category/database/

RWC Music Database (n DVDs) [available in Stanford Music library]

RWC - Sound Instruments Table of Contents

http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html

University of Iowa Musical Instrument Samples

https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music

MATLAB Utility Scripts

http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/


Bonus Lab Material from Previous Years (Matlab)

  • Harmony Analysis Slides / Labs
    • [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]
    • [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]
    • [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]
    • [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]
    • [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]
    • [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]
  • Overview of Weka & the Wekinator
    • [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]
    • [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]
    • [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]
    • [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]
  • Downloads
    • [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]
    • [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]
    • [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]
  • A brief history of MIR
    • See also http://www.ismir.net/texts/Byrd02.html
  • Notes
    • CAL500 decoding (a shell one-liner that converts each MP3 to 16-bit big-endian, 44.1 kHz AIFF with the macOS afconvert tool):

for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done

  • Extract CAL500 per-song features to .mat or .csv using the features from today; this will be used in Friday's lab. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB).