CCRMA Wiki: MIR workshop 2015 (revision of 2015-07-17 by user Kiemyang)
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== News ==<br />
<br />
'''Wednesday, July 15'''<br />
<br />
8:48 am:<br />
<br />
* Today: Zafar Rafii, Jeff Scott, Aneesh Vartakavi, et al. of Gracenote will join us for lunch and for guest lectures in the afternoon.<br />
* If you checked out https://github.com/stevetjoa/stanford-mir onto your local machine, be sure to '''git checkout gh-pages''' before working.<br />
<br />
'''Tuesday, July 14'''<br />
<br />
9:31 am:<br />
<br />
* Don't forget '''%matplotlib inline''' at the top of your notebooks.<br />
<br />
'''Monday, July 13'''<br />
<br />
2:18 pm: dependencies:<br />
<br />
* apt-get install: git, python-dev, pip, python-scipy, python-matplotlib<br />
* Python packages: pip, boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, mir_eval, seaborn, requests<br />
* (Anaconda)<br />
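<br />
Note that a few of the pip names above differ from their Python import names (scikit-learn imports as sklearn, ipython as IPython). A quick post-install sanity check, assuming nothing about your setup beyond the standard library:<br />

```python
import importlib.util

# pip package name -> import name (they differ for a few packages)
PIP_TO_IMPORT = {
    "numpy": "numpy",
    "scipy": "scipy",
    "matplotlib": "matplotlib",
    "ipython": "IPython",
    "scikit-learn": "sklearn",
    "librosa": "librosa",
    "mir_eval": "mir_eval",
    "seaborn": "seaborn",
    "requests": "requests",
}

def missing_packages(mapping=PIP_TO_IMPORT):
    """Return the pip names of packages that cannot be imported."""
    return [pip for pip, mod in mapping.items()
            if importlib.util.find_spec(mod) is None]
```

Run missing_packages() once in a notebook; an empty list means you are ready for the labs.<br />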
<br />
11:11 am: Your post-it notes:<br />
<br />
* content-based analysis e.g. classifying violin playing style (vibrato, bowing)<br />
* MIR overview; music recommendation<br />
* feature extraction; dimensionality reduction; prediction<br />
* source separation techniques<br />
* chord estimation; "split" musical instruments; find beats in a song<br />
* audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)<br />
* acoustic fingerprinting<br />
* machine learning; turn analysis -> synth; music characterization<br />
* beat tracking; ways of identifying timbre<br />
* mood recognition<br />
* instrument separation; real-time processing<br />
* Marsyas?<br />
* speed of retrieval<br />
* what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas<br />
* machine learning techniques for more general audio problems i.e. language detection or identifying sound sources<br />
* networking and getting to know you all<br />
<br />
== Attendees ==<br />
<br />
Eric Raymond <lowlifi@gmail.com>, Stelios Andrew Stavroulakis, Richard Mendelsohn, Naithan Bosse, Alessio Bazzica, Karthik Yadati, Martha Larson, Stephen Hartzog, Philip Lee, Jaeyoung Choi, Matthew Gallagher, Yule Wu, Mark Renker, Rohit Ainapure, Eric Tarr <erictarr@gmail.com>, Allen Wu, Aaron Hipple<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop teaches the underlying ideas, approaches, and technologies behind these tasks, along with the practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features (e.g., chord estimates), audio similarity, clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction (Nette, Fernando) <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
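<br />
As a minimal illustration of the instance-based idea above (not the lab's code, which builds on existing toolkits), a k-NN majority vote in NumPy; the feature matrix X and label vector y are placeholders for real extracted features:<br />

```python
import numpy as np

def knn_classify(query, X, y, k=3):
    """Label a query point by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X - query, axis=1)   # Euclidean distance to each row of X
    nearest = y[np.argsort(dists)[:k]]          # labels of the k closest points
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]            # most common label wins
```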
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
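<br />
To make the pipeline above concrete, a NumPy sketch of windowed extraction with one time-domain feature (zero-crossing rate) and one frequency-domain feature (spectral centroid); the frame size and hop length are illustrative defaults, not prescribed values:<br />

```python
import numpy as np

def frame_signal(x, frame_size=1024, hop=512):
    """Slice a 1-D signal into overlapping frames (windowed feature extraction)."""
    n_frames = 1 + max(0, (len(x) - frame_size) // hop)
    return np.stack([x[i * hop : i * hop + frame_size] for i in range(n_frames)])

def zero_crossing_rate(frames):
    """Time-domain feature: fraction of sign changes within each frame."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def spectral_centroid(frames, sr=44100):
    """Frequency-domain feature: magnitude-weighted mean frequency per frame."""
    window = np.hanning(frames.shape[1])
    mags = np.abs(np.fft.rfft(frames * window, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    return (mags @ freqs) / (mags.sum(axis=1) + 1e-12)
```

On a pure 440 Hz sine, the zero-crossing rate should come out near 2f/sr and the centroid near 440 Hz.<br />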
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Pitch and Chroma Analysis; Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
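<br />
Of the unsupervised methods above, k-means is simple enough to sketch from scratch; this is Lloyd's alternating assign/update loop in NumPy, not any particular toolkit's implementation:<br />

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Lloyd's k-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # distance from every point to every centroid -> (n_points, k)
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):             # skip empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```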
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
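<br />
Dynamic time warping, listed under Analysis above, is compact enough to show directly; this is the textbook recurrence with an absolute-difference local cost, not an optimized implementation:<br />

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping path may stretch either sequence, a melody played at half tempo still aligns at zero cost, e.g. dtw_distance([1, 2, 3], [1, 1, 2, 2, 3, 3]) == 0. This elasticity is what makes DTW useful for audio-score alignment and query-by-humming.<br />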
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/knn_instrument_classification.html K-NN Instrument Classification]<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
<br />
=== Day 3: Deep Belief Networks; Pitch Transcription ===<br />
<br />
Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Pitch Transcription Exercise<br />
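<br />
One classic baseline for such an exercise (an assumption about the exercise's content, not its actual solution) is monophonic pitch estimation from the autocorrelation peak:<br />

```python
import numpy as np

def estimate_pitch(x, sr, fmin=50.0, fmax=2000.0):
    """Monophonic pitch estimate (Hz) from the autocorrelation peak lag."""
    x = x - np.mean(x)
    # keep non-negative lags only
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag range for the allowed pitch range
    lag = lo + int(np.argmax(ac[lo:hi + 1]))  # strongest periodicity in range
    return sr / lag
```

The resolution is limited to integer lags, so expect errors of a few Hz at higher pitches; real systems interpolate around the peak.<br />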
<br />
Guest lectures by Gracenote<br />
<br />
Catch-up from yesterday<br />
<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
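<br />
A toy version of the multiplicative-update NMF iteration named above: factor a nonnegative spectrogram-like matrix V into spectral templates W and activations H. A real system would run this on an STFT magnitude matrix; the dimensions here are placeholders:<br />

```python
import numpy as np

def nmf(V, k, n_iter=200, seed=0):
    """Multiplicative-update NMF: V (freq x time, nonneg) ~= W (freq x k) @ H (k x time)."""
    rng = np.random.default_rng(seed)
    f, t = V.shape
    W = rng.random((f, k)) + 1e-3   # spectral templates
    H = rng.random((k, t)) + 1e-3   # per-frame activations
    for _ in range(n_iter):
        # these updates keep W and H nonnegative by construction
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

For transcription, each column of W ideally captures one note or source spectrum and the matching row of H shows when it is active.<br />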
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
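<br />
The set-based definitions behind precision, recall, and F-measure fit in a few lines of Python:<br />

```python
def precision_recall_f(retrieved, relevant):
    """Precision, recall, and F-measure for a retrieved set against a relevant set."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                        # true positives
    precision = tp / len(retrieved) if retrieved else 0.0  # fraction retrieved that is relevant
    recall = tp / len(relevant) if relevant else 0.0       # fraction relevant that is retrieved
    f = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f
```

For example, retrieving {1, 2, 3, 4} when {1, 2, 5} is relevant gives precision 1/2, recall 2/3, and F-measure 4/7.<br />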
<br />
<br />
=== Day 5: Hashing for Music Search and Retrieval ===<br />
<br />
Locality Sensitive Hashing ([http://musicinformationretrieval.com/lsh_fingerprinting.html notebook])<br />
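<br />
As a generic illustration of the hashing idea (random-hyperplane signatures for cosine similarity; the notebook's fingerprinting scheme may differ), items whose signatures collide land in the same bucket and become candidate matches, so lookup avoids comparing against the whole database:<br />

```python
import numpy as np

rng = np.random.default_rng(42)
planes = rng.standard_normal((16, 8))  # 16 random hyperplanes for 8-d feature vectors

def lsh_signature(v, planes=planes):
    """Random-hyperplane LSH: one bit per plane, the sign of the projection."""
    return tuple(bool(b) for b in (planes @ v) > 0)
```

Signatures depend only on direction, so positively scaled versions of a vector (e.g. the same clip at a different volume, under this feature model) hash to the same bucket.<br />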
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures ==<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, 2nd edition, Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab, Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, A. Klapuri and M. Davy (editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (editors)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000 <br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== News ==<br />
<br />
'''Wednesday, July 15'''<br />
<br />
8:48 am:<br />
<br />
* Today: Zafar Rafii, Jeff Scott, Aneesh Vartakavi, et al. of Gracenote will join us for lunch and for guest lectures in the afternoon.<br />
* If you checked out https://github.com/stevetjoa/stanford-mir onto your local machine, be sure to '''git checkout gh-pages''' before working.<br />
<br />
'''Tuesday, July 14'''<br />
<br />
9:31 am:<br />
<br />
* Don't forget '''%matplotlib inline''' at the top of your notebooks.<br />
<br />
'''Monday, July 13'''<br />
<br />
2:18 pm: dependencies:<br />
<br />
* apt-get install: git, python-dev, pip, python-scipy, python-matplotlib<br />
* Python packages: pip, boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, mir_eval, seaborn, requests<br />
* (Anaconda)<br />
<br />
11:11 am: Your post-it notes:<br />
<br />
* content-based analysis e.g. classifying violin playing style (vibrato, bowing)<br />
* MIR overview; music recommendation<br />
* feature extraction; dimensionality reduction; prediction<br />
* source separation techniques<br />
* chord estimation; "split" musical instruments; find beats in a song<br />
* audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)<br />
* acoustic fingerprinting<br />
* machine learning; turn analysis -> synth; music characterization<br />
* beat tracking; ways of identifying timbre<br />
* mood recognition<br />
* instrument separation; real-time processing<br />
* Marsyas?<br />
* speed of retrieval<br />
* what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas<br />
* machine learning techniques for more general audio problems i.e. language detection or identifying sound sources<br />
* networking and getting to know you all<br />
<br />
== Attendees ==<br />
<br />
Eric Raymond <lowlifi@gmail.com>, Stelios Andrew Stavroulakis, Richard Mendelsohn, Naithan Bosse, Alessio Bazzica, Karthik Yadati, Martha Larson, Stephen Hartzog, Philip Lee, Jaeyoung Choi, Matthew Gallagher, Yule Wu, Mark Renker, Rohit Ainapure, Eric Tarr, Allen Wu, Aaron Hipple<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Pitch and Chroma Analysis; Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/knn_instrument_classification.html K-NN Instrument Classification]<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
<br />
=== Day 3: Deep Belief Networks; Pitch Transcription ===<br />
<br />
Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
[ https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Pitch Transcription Exercise<br />
<br />
Guest lectures by Gracenote<br />
<br />
Catch-up from yesterday<br />
<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
<br />
=== Day 5: Hashing for Music Search and Retrieval ===<br />
<br />
Locality Sensitive Hashing ([http://musicinformationretrieval.com/lsh_fingerprinting.html notebook])<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html Univ or Iowa Music Instrument Samples ]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18254MIR workshop 20152015-07-17T16:34:27Z<p>Kiemyang: /* Day 5: Beat, Rhythm, */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== News ==<br />
<br />
'''Wednesday, July 15'''<br />
<br />
8:48 am:<br />
<br />
* Today: Zafar Rafii, Jeff Scott, Aneesh Vartakavi, et al. of Gracenote will join us for lunch and for guest lectures in the afternoon.<br />
* If you checked out https://github.com/stevetjoa/stanford-mir onto your local machine, be sure to '''git checkout gh-pages''' before working.<br />
<br />
'''Tuesday, July 14'''<br />
<br />
9:31 am:<br />
<br />
* Don't forget '''%matplotlib inline''' at the top of your notebooks.<br />
<br />
'''Monday, July 13'''<br />
<br />
2:18 pm: dependencies:<br />
<br />
* apt-get install: git, python-dev, pip, python-scipy, python-matplotlib<br />
* Python packages: pip, boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, mir_eval, seaborn, requests<br />
* (Anaconda)<br />
<br />
11:11 am: Your post-it notes:<br />
<br />
* content-based analysis e.g. classifying violin playing style (vibrato, bowing)<br />
* MIR overview; music recommendation<br />
* feature extraction; dimensionality reduction; prediction<br />
* source separation techniques<br />
* chord estimation; "split" musical instruments; find beats in a song<br />
* audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)<br />
* acoustic fingerprinting<br />
* machine learning; turn analysis -> synth; music characterization<br />
* beat tracking; ways of identifying timbre<br />
* mood recognition<br />
* instrument separation; real-time processing<br />
* Marsyas?<br />
* speed of retrieval<br />
* what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas<br />
* machine learning techniques for more general audio problems i.e. language detection or identifying sound sources<br />
* networking and getting to know you all<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction (Nette, Fernando) <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
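As a concrete sketch of this windowed pipeline, the snippet below frames a signal and computes one time-domain feature (zero-crossing rate) and one frequency-domain feature (spectral centroid) per frame, in plain NumPy. It is illustrative only, not the lab's reference code; the 440 Hz sine stands in for real audio.<br />

```python
import numpy as np

def frame_signal(y, frame_len=2048, hop=512):
    """Slice a 1-D signal into overlapping frames (one frame per row)."""
    n_frames = 1 + (len(y) - frame_len) // hop
    return np.stack([y[i * hop : i * hop + frame_len] for i in range(n_frames)])

def zero_crossing_rate(frames):
    """Time-domain feature: fraction of sign changes per frame."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def spectral_centroid(frames, sr):
    """Frequency-domain feature: magnitude-weighted mean frequency (Hz)."""
    mags = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    return (mags @ freqs) / np.maximum(mags.sum(axis=1), 1e-10)

# 1 second of a 440 Hz sine as a stand-in for a real recording.
sr = 22050
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
frames = frame_signal(y)
zcr = zero_crossing_rate(frames)
centroid = spectral_centroid(frames, sr)
```

For a pure 440 Hz tone the centroid sits near 440 Hz and the ZCR near 2 x 440 / sr, which is a quick sanity check when wiring up your own extractors.<br />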
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Pitch and Chroma Analysis; Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
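The supervised/unsupervised split can be shown in a few lines of scikit-learn: k-NN is given labels, k-means has to discover them. The two synthetic 2-D feature clusters below (think ZCR vs. spectral centroid) are made-up stand-ins, not workshop data:<br />

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two fake instrument classes in a 2-D feature space (e.g. ZCR, centroid Hz).
kick  = rng.normal([0.1, 500.0],  [0.02, 50.0],  size=(20, 2))
snare = rng.normal([0.4, 3000.0], [0.05, 200.0], size=(20, 2))
X = np.vstack([kick, snare])
y = np.array([0] * 20 + [1] * 20)

# Supervised: k-NN classifies a new frame using the labels we already have.
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
pred = knn.predict([[0.12, 520.0]])   # lands in the "kick" region

# Unsupervised: k-means discovers two clusters without any labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
```

In real labs you would standardize features first, since raw Hz values otherwise dominate the Euclidean distance.<br />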
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
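Dynamic time warping, the workhorse behind audio-score alignment and cover-song detection, fits in a dozen lines. A pure-NumPy sketch over symbolic pitch sequences (real systems align chroma frames the same way):<br />

```python
import numpy as np

def dtw_cost(a, b):
    """Total cost of the cheapest monotonic alignment of sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

melody = [60, 62, 64, 65]                  # MIDI pitches, quarter notes
performance = [60, 60, 62, 64, 64, 65]     # same line with rubato (held notes)
```

Here `dtw_cost(melody, performance)` is 0 because the performance is the same melody with held notes, while a sequence with a wrong final pitch yields a strictly positive cost.<br />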
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/knn_instrument_classification.html K-NN Instrument Classification]<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
<br />
=== Day 3: Deep Belief Networks; Pitch Transcription ===<br />
<br />
Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Pitch Transcription Exercise<br />
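A typical starting point for this exercise is autocorrelation-based f0 estimation on a single frame. The helper below is a hypothetical sketch, not the exercise's official solution:<br />

```python
import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=1000.0):
    """Estimate fundamental frequency (Hz) from the autocorrelation peak."""
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # valid period (lag) range
    lag = lo + int(np.argmax(corr[lo:hi]))    # best period in that range
    return sr / lag

sr = 22050
t = np.arange(2048) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 220 * t), sr)   # should be near 220 Hz
```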
<br />
Guest lectures by Gracenote<br />
<br />
Catch-up from yesterday<br />
<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
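The NMF model factors a nonnegative magnitude spectrogram V into spectral templates W and activations H, i.e. V ≈ WH. A toy sketch with scikit-learn on a synthetic rank-2 "spectrogram" (real systems factor STFT magnitudes):<br />

```python
import numpy as np
from sklearn.decomposition import NMF

# Two synthetic "notes": 4 frequency bins, 5 time frames.
W_true = np.array([[1.0, 0.0],
                   [0.8, 0.1],
                   [0.0, 1.0],
                   [0.1, 0.7]])                      # spectral templates
H_true = np.array([[1, 1, 0, 0, 1],
                   [0, 0, 1, 1, 1]], dtype=float)    # activations over time
V = W_true @ H_true                                  # toy magnitude spectrogram

model = NMF(n_components=2, init="nndsvda", max_iter=500)
W = model.fit_transform(V)    # learned templates
H = model.components_         # learned activations
V_hat = W @ H                 # reconstruction, close to V
```

Each column of W is one source's spectrum; transcription reads note onsets off the rows of H, and separation rebuilds each source from its own rank-1 term.<br />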
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
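These metrics reduce to three one-liners; computing them by hand on a toy retrieval result (hypothetical items) makes the definitions concrete:<br />

```python
# precision = TP / (TP + FP), recall = TP / (TP + FN),
# F-measure = harmonic mean of precision and recall.
relevant  = {"a", "b", "c", "d"}          # ground-truth items
retrieved = {"a", "b", "e"}               # what the system returned

tp = len(relevant & retrieved)            # true positives
precision = tp / len(retrieved)
recall = tp / len(relevant)
f_measure = 2 * precision * recall / (precision + recall)
```

mir_eval and scikit-learn provide the same metrics for real evaluations.<br />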
<br />
<br />
=== Day 5: Hashing for Music Search and Retrieval ===<br />
<br />
Locality Sensitive Hashing ([http://musicinformationretrieval.com/lsh_fingerprinting.html notebook])<br />
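The core trick in the notebook is the random-hyperplane LSH family: project a feature vector onto random directions and keep only the signs, so near-duplicate vectors almost always share a short bit signature (and thus a hash bucket). A minimal sketch on random data, not real fingerprints:<br />

```python
import numpy as np

def lsh_signature(x, planes):
    """Sign of the projection onto each random hyperplane -> bit tuple."""
    return tuple((planes @ x) > 0)

rng = np.random.default_rng(0)
planes = rng.normal(size=(16, 40))          # 16 bits over 40-D features

x = rng.normal(size=40)                     # indexed feature vector
x_noisy = x + 1e-4 * rng.normal(size=40)    # near-duplicate query
y_other = rng.normal(size=40)               # unrelated vector

sig_x = lsh_signature(x, planes)
sig_noisy = lsh_signature(x_noisy, planes)
sig_other = lsh_signature(y_other, planes)
```

The near-duplicate agrees with x on (almost) every bit, while an unrelated vector agrees only about half the time, which is what makes bucketed lookup fast.<br />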
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, 2nd edition, Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley, 2000<br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* Artificial Intelligence: A Modern Approach, 2nd edition, Stuart Russell & Peter Norvig, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18250MIR workshop 20152015-07-15T16:03:24Z<p>Kiemyang: /* News */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== News ==<br />
<br />
'''Wednesday, July 15'''<br />
<br />
8:48 am:<br />
<br />
* Today: Zafar Rafii, Jeff Scott, Aneesh Vartakavi, et al. of Gracenote will join us for lunch and for guest lectures in the afternoon.<br />
* If you checked out https://github.com/stevetjoa/stanford-mir onto your local machine, be sure to '''git checkout gh-pages''' before working.<br />
<br />
'''Tuesday, July 14'''<br />
<br />
9:31 am:<br />
<br />
* Don't forget '''%matplotlib inline''' at the top of your notebooks.<br />
<br />
'''Monday, July 13'''<br />
<br />
2:18 pm: dependencies:<br />
<br />
* apt-get install: git, python-dev, pip, python-scipy, python-matplotlib<br />
* Python packages: pip, boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, mir_eval, seaborn, requests<br />
* (Anaconda)<br />
<br />
11:11 am: Your post-it notes:<br />
<br />
* content-based analysis e.g. classifying violin playing style (vibrato, bowing)<br />
* MIR overview; music recommendation<br />
* feature extraction; dimensionality reduction; prediction<br />
* source separation techniques<br />
* chord estimation; "split" musical instruments; find beats in a song<br />
* audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)<br />
* acoustic fingerprinting<br />
* machine learning; turn analysis -> synth; music characterization<br />
* beat tracking; ways of identifying timbre<br />
* mood recognition<br />
* instrument separation; real-time processing<br />
* Marsyas?<br />
* speed of retrieval<br />
* what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas<br />
* machine learning techniques for more general audio problems i.e. language detection or identifying sound sources<br />
* networking and getting to know you all<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Pitch and Chroma Analysis; Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/knn_instrument_classification.html K-NN Instrument Classification]<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
<br />
=== Day 3: Deep Belief Networks; Pitch Transcription ===<br />
<br />
Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
[ https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Pitch Transcription Exercise<br />
<br />
Guest lectures by Gracenote<br />
<br />
Catch-up from yesterday<br />
<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
<br />
=== Day 5: Beat, Rhythm, ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
<br />
An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html Univ or Iowa Music Instrument Samples ]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18249MIR workshop 20152015-07-15T16:03:05Z<p>Kiemyang: /* News */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== News ==<br />
<br />
'''Wednesday, July 15'''<br />
<br />
8:48 am:<br />
<br />
* Today: Zafar Rafii, Jeff Scott, Aneesh Vartakavi, et al. of Gracenote will join us for lunch and for guest lectures in the afternoon.<br />
* If you checked out https://github.com/stevetjoa/stanford-mir onto your local machine, be sure to '''git checkout gh-pages''' before working.<br />
<br />
'''Tuesday, July 14'''<br />
<br />
9:31 am:<br />
<br />
* Don't forget "%matplotlib inline" at the top of your notebooks.<br />
<br />
'''Monday, July 13'''<br />
<br />
2:18 pm: dependencies:<br />
<br />
* apt-get install: git, python-dev, pip, python-scipy, python-matplotlib<br />
* Python packages: pip, boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, mir_eval, seaborn, requests<br />
* (Anaconda)<br />
<br />
11:11 am: Your post-it notes:<br />
<br />
* content-based analysis e.g. classifying violin playing style (vibrato, bowing)<br />
* MIR overview; music recommendation<br />
* feature extraction; dimensionality reduction; prediction<br />
* source separation techniques<br />
* chord estimation; "split" musical instruments; find beats in a song<br />
* audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)<br />
* acoustic fingerprinting<br />
* machine learning; turn analysis -> synth; music characterization<br />
* beat tracking; ways of identifying timbre<br />
* mood recognition<br />
* instrument separation; real-time processing<br />
* Marsyas?<br />
* speed of retrieval<br />
* what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas<br />
* machine learning techniques for more general audio problems i.e. language detection or identifying sound sources<br />
* networking and getting to know you all<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Pitch and Chroma Analysis; Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
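<br />
To make the DTW analysis step concrete, here is a minimal sketch (not the workshop's code) of dynamic time warping between two "chroma-like" sequences; the one-hot toy melodies below are invented for illustration. DTW absorbs tempo differences, which is what makes chroma + DTW useful for audio-score alignment and cover song detection:<br />
<br />
```python
import numpy as np

def dtw(X, Y):
    """Dynamic time warping between two feature sequences (rows = frames).
    Returns the cumulative cost of the best monotone alignment of X to Y."""
    n, m = len(X), len(Y)
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # pairwise distances
    C = np.full((n + 1, m + 1), np.inf)
    C[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            C[i, j] = D[i-1, j-1] + min(C[i-1, j], C[i, j-1], C[i-1, j-1])
    return C[n, m]

# Two toy sequences: the same melody at different tempi, plus an unrelated one
melody = np.eye(12)[[0, 4, 7, 4, 0]]   # C E G E C as one-hot chroma vectors
slow   = np.repeat(melody, 3, axis=0)  # same melody, three times slower
other  = np.eye(12)[[1, 3, 8, 10, 2]]  # unrelated pitch classes
print(dtw(melody, slow))   # small: DTW absorbs the tempo change
print(dtw(melody, other))  # large: genuinely different content
```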
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/knn_instrument_classification.html K-NN Instrument Classification]<br />
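<br />
The core idea of the k-NN lab can be sketched with scikit-learn. This is a toy stand-in, not the lab itself: the two "instrument" classes below are synthetic Gaussian clouds over two invented features (zero-crossing rate and spectral centroid):<br />
<br />
```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
# Toy 2-D features for two classes: a "kick drum" class (low ZCR, low
# spectral centroid) vs a "snare" class (high ZCR, high centroid)
kicks  = rng.normal(loc=[0.05, 300.0],  scale=[0.01, 50.0],  size=(50, 2))
snares = rng.normal(loc=[0.25, 2500.0], scale=[0.05, 400.0], size=(50, 2))
X = np.vstack([kicks, snares])
y = np.array([0] * 50 + [1] * 50)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict([[0.04, 320.0], [0.3, 2600.0]]))  # -> [0 1]
```
Note that the centroid axis dominates the Euclidean distance here; in practice one would standardize features before fitting.<br />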
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
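<br />
The clustering half of the lab follows the same pattern. Here is a hedged sketch with k-means on synthetic "MFCC-like" frames (two 13-dimensional Gaussian clouds standing in for two timbres; nothing below comes from the lab's actual data):<br />
<br />
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
# Stand-ins for MFCC frames from two timbres: two well-separated clouds in 13-D
timbre_a = rng.normal(loc=0.0, scale=1.0, size=(100, 13))
timbre_b = rng.normal(loc=8.0, scale=1.0, size=(100, 13))
frames = np.vstack([timbre_a, timbre_b])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(frames)
labels = km.labels_
# Each timbre's frames should land entirely in one cluster
print(len(set(labels[:100])), len(set(labels[100:])))  # -> 1 1
```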
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
<br />
=== Day 3: Deep Belief Networks; Pitch Transcription ===<br />
<br />
Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Pitch Transcription Exercise<br />
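<br />
As a starting point for the exercise (a sketch, not the assigned solution), monophonic pitch can be estimated from a frame's autocorrelation: the lag of the strongest periodicity gives the fundamental period. All parameter values below are illustrative:<br />
<br />
```python
import numpy as np

def detect_pitch(x, sr, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental of a monophonic frame by autocorrelation."""
    x = x - x.mean()
    r = np.correlate(x, x, mode='full')[len(x)-1:]   # autocorrelation, lags >= 0
    lo, hi = int(sr / fmax), int(sr / fmin)          # search plausible periods
    lag = lo + np.argmax(r[lo:hi])                   # strongest periodicity
    return sr / lag

sr = 22050
t = np.arange(2048) / sr
tone = np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*880*t)  # A4 plus its octave
print(detect_pitch(tone, sr))  # close to 440 Hz (quantized by the integer lag)
```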
<br />
Guest lectures by Gracenote<br />
<br />
Catch-up from yesterday<br />
<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
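<br />
The NMF idea can be shown end-to-end on a toy "spectrogram". This sketch hand-rolls the Lee-Seung multiplicative updates for the Euclidean objective (not any library's implementation); the two spectral templates and their activations are invented for the example:<br />
<br />
```python
import numpy as np

def nmf(V, n_components, n_iter=500, seed=0):
    """Factor a nonnegative matrix V (freq x time) as W @ H using
    Lee-Seung multiplicative updates for the Euclidean objective."""
    rng = np.random.RandomState(seed)
    F, T = V.shape
    W = rng.rand(F, n_components)
    H = rng.rand(n_components, T)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy spectrogram: two spectral templates active at different times
template1 = np.array([1.0, 0.0, 0.5, 0.0])   # "note 1" spectrum
template2 = np.array([0.0, 1.0, 0.0, 0.5])   # "note 2" spectrum
activations = np.array([[1, 1, 0, 0, 1],
                        [0, 0, 1, 1, 1]], dtype=float)
V = np.outer(template1, activations[0]) + np.outer(template2, activations[1])

W, H = nmf(V, n_components=2)
print(np.abs(V - W @ H).max())  # small residual: rank-2 structure recovered
```
In a real transcription system, the columns of W would be learned note spectra and the rows of H their activations over time.<br />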
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, F-measure, area under the ROC curve, ...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
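<br />
The three basic metrics are easy to compute directly. A minimal sketch (the track IDs are made up): precision is the fraction of retrieved items that are relevant, recall the fraction of relevant items retrieved, and F-measure their harmonic mean:<br />
<br />
```python
def precision_recall_f(retrieved, relevant):
    """IR metrics for a retrieval (or detection) task, given sets of item ids."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                       # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f

# A system returns 4 tracks; 3 are truly relevant, out of 6 relevant in total
p, r, f = precision_recall_f(retrieved=[1, 2, 3, 9], relevant=[1, 2, 3, 4, 5, 6])
print(p, r, f)  # -> 0.75 0.5 0.6
```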
<br />
<br />
=== Day 5: Beat and Rhythm ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
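<br />
The autocorrelation approach, for example, reads tempo off the periodicity of an onset-strength envelope. A hedged sketch on a synthetic envelope (an impulse train at 120 BPM; the function and its parameters are invented for illustration):<br />
<br />
```python
import numpy as np

def estimate_tempo(onset_env, frame_rate, bpm_range=(60, 180)):
    """Tempo from the autocorrelation of an onset-strength envelope."""
    env = onset_env - onset_env.mean()
    r = np.correlate(env, env, mode='full')[len(env)-1:]  # lags >= 0
    lo = int(frame_rate * 60.0 / bpm_range[1])            # shortest beat period
    hi = int(frame_rate * 60.0 / bpm_range[0])            # longest beat period
    lag = lo + np.argmax(r[lo:hi])                        # strongest periodicity
    return 60.0 * frame_rate / lag

# Synthetic onset envelope: a click every 0.5 s (120 BPM) at 100 frames/sec
frame_rate = 100
env = np.zeros(1000)
env[::50] = 1.0
print(estimate_tempo(env, frame_rate))  # -> 120.0
```
The bpm_range search window also hints at why octave errors (60 vs 120 BPM) are a classic failure mode: the envelope is periodic at every multiple of the beat period.<br />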
<br />
An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000 <br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
<br />
<br />
=== Day 3: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add MFCCs to the classification system and test with cross-validation <br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley, 2000 <br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics), Christopher M. Bishop, Springer, 2006.<br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, Richard O. Duda, Peter E. Hart, and David G. Stork, Wiley-Interscience, 2001.<br />
* Artificial Intelligence: A Modern Approach, 2nd edition, Stuart Russell and Peter Norvig, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw-Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
[http://wiki.laptop.org/go/Sound_samples OLPC Sound Sample Archive (8.5 GB)]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in the Stanford Music Library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18246MIR workshop 20152015-07-14T16:32:17Z<p>Kiemyang: /* News */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== News ==<br />
<br />
'''Tuesday, July 14```<br />
<br />
9:31 am:<br />
<br />
* Don't forget "%matplotlib inline" at the top of your notebooks.<br />
<br />
'''Monday, July 13'''<br />
<br />
2:18 pm: dependencies:<br />
<br />
* apt-get install: git, python-dev, pip, python-scipy, python-matplotlib<br />
* Python packages: pip, boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, mir_eval, seaborn, requests<br />
* (Anaconda)<br />
<br />
11:11 am: Your post-it notes:<br />
<br />
* content-based analysis e.g. classifying violin playing style (vibrato, bowing)<br />
* MIR overview; music recommendation<br />
* feature extraction; dimensionality reduction; prediction<br />
* source separation techniques<br />
* chord estimation; "split" musical instruments; find beats in a song<br />
* audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)<br />
* acoustic fingerprinting<br />
* machine learning; turn analysis -> synth; music characterization<br />
* beat tracking; ways of identifying timbre<br />
* mood recognition<br />
* instrument separation; real-time processing<br />
* Marsyas?<br />
* speed of retrieval<br />
* what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas<br />
* machine learning techniques for more general audio problems i.e. language detection or identifying sound sources<br />
* networking and getting to know you all<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
<br />
<br />
=== Day 3: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add in MFCCs to classification and test w Cross validation <br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[ https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html Univ or Iowa Music Instrument Samples ]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18245MIR workshop 20152015-07-14T16:15:59Z<p>Kiemyang: /* Schedule */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== News ==<br />
<br />
'''Monday, July 13'''<br />
<br />
2:18 pm: dependencies:<br />
<br />
* apt-get install: git, python-dev, pip, python-scipy, python-matplotlib<br />
* Python packages: pip, boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, mir_eval, seaborn, requests<br />
* (Anaconda)<br />
<br />
11:11 am: Your post-it notes:<br />
<br />
* content-based analysis e.g. classifying violin playing style (vibrato, bowing)<br />
* MIR overview; music recommendation<br />
* feature extraction; dimensionality reduction; prediction<br />
* source separation techniques<br />
* chord estimation; "split" musical instruments; find beats in a song<br />
* audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)<br />
* acoustic fingerprinting<br />
* machine learning; turn analysis -> synth; music characterization<br />
* beat tracking; ways of identifying timbre<br />
* mood recognition<br />
* instrument separation; real-time processing<br />
* Marsyas?<br />
* speed of retrieval<br />
* what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas<br />
* machine learning techniques for more general audio problems i.e. language detection or identifying sound sources<br />
* networking and getting to know you all<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
<br />
<br />
=== Day 3: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add in MFCCs to classification and test w Cross validation <br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[ https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18244MIR workshop 20152015-07-14T16:08:36Z<p>Kiemyang: /* News */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== News ==<br />
<br />
'''Monday, July 13'''<br />
<br />
2:18 pm: dependencies:<br />
<br />
* apt-get install: git, python-dev, pip, python-scipy, python-matplotlib<br />
* Python packages: pip, boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, mir_eval, seaborn, requests<br />
* (Anaconda)<br />
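Before the first lab, it can save time to confirm that the dependencies above actually import. A minimal, stdlib-only sketch (note that import names can differ from pip names, e.g. scikit-learn imports as sklearn):<br />

```python
import importlib.util

def check_deps(names):
    """Return {name: True/False} for whether each package is importable."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

# Import names for the workshop dependencies (pip names may differ:
# scikit-learn -> sklearn).
WORKSHOP_DEPS = ["numpy", "scipy", "matplotlib", "IPython",
                 "sklearn", "librosa", "mir_eval", "seaborn", "requests"]
```

Then `missing = [n for n, ok in check_deps(WORKSHOP_DEPS).items() if not ok]` lists anything left to install.<br />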
<br />
11:11 am: Your post-it notes:<br />
<br />
* content-based analysis e.g. classifying violin playing style (vibrato, bowing)<br />
* MIR overview; music recommendation<br />
* feature extraction; dimensionality reduction; prediction<br />
* source separation techniques<br />
* chord estimation; "split" musical instruments; find beats in a song<br />
* audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)<br />
* acoustic fingerprinting<br />
* machine learning; turn analysis -> synth; music characterization<br />
* beat tracking; ways of identifying timbre<br />
* mood recognition<br />
* instrument separation; real-time processing<br />
* Marsyas?<br />
* speed of retrieval<br />
* what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas<br />
* machine learning techniques for more general audio problems i.e. language detection or identifying sound sources<br />
* networking and getting to know you all<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic "intelligent audio systems" from the ground up, leveraging existing MIR toolboxes, programming environments, and applications. Labs will include the creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
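As a concrete reference for the k-NN discussion, here is a minimal instance-based classifier in plain NumPy (Euclidean distance, majority vote). It is a sketch for intuition; scikit-learn's KNeighborsClassifier is the production version of the same idea:<br />

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Instance-based classification: label x by majority vote
    among its k nearest training examples (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every example
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]             # most common label wins
```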
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
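The windowed-extraction pipeline above can be sketched in a few lines of NumPy; the two time-domain features shown (RMS energy and zero-crossing rate) are illustrative choices, not the full feature set used in the labs:<br />

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Slice a mono signal into overlapping frames (tail samples dropped)."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def rms(frames):
    """Root-mean-square energy per frame (a loudness correlate)."""
    return np.sqrt(np.mean(frames ** 2, axis=1))

def zero_crossing_rate(frames):
    """Fraction of adjacent-sample sign changes per frame
    (a noisiness/brightness correlate)."""
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
```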
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
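To make the spectral-difference idea concrete, here is a sketch of half-wave-rectified spectral flux with naive peak picking, in plain NumPy. Real detectors add smoothing, adaptive thresholding, and perceptual weighting, which this sketch omits:<br />

```python
import numpy as np

def spectral_flux(x, frame_len=1024, hop=512):
    """Novelty curve: half-wave-rectified increase in FFT magnitude
    between consecutive windowed frames."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    window = np.hanning(frame_len)
    mags = np.stack([np.abs(np.fft.rfft(window * x[i * hop : i * hop + frame_len]))
                     for i in range(n_frames)])
    flux = np.maximum(np.diff(mags, axis=0), 0.0).sum(axis=1)
    return np.concatenate([[0.0], flux])  # flux[i] = magnitude increase into frame i

def pick_onsets(flux, threshold):
    """Naive peak picking: local maxima above a fixed threshold."""
    return [i for i in range(1, len(flux) - 1)
            if flux[i] > threshold and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]]
```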
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
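Of the approaches above, autocorrelation is the easiest to sketch: correlate an onset-strength envelope with itself and read the tempo off the strongest lag inside a plausible BPM range. A toy NumPy version (real trackers weight the lag axis and track tempo over time):<br />

```python
import numpy as np

def estimate_tempo(onset_env, hop_s, min_bpm=60, max_bpm=180):
    """Tempo from the autocorrelation of an onset-strength envelope:
    the strongest lag in [min_bpm, max_bpm] is read as the beat period.
    hop_s is the envelope's frame period in seconds."""
    env = onset_env - onset_env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # lags 0..N-1
    min_lag = int(60.0 / (max_bpm * hop_s))  # fast tempo -> short lag
    max_lag = int(60.0 / (min_bpm * hop_s))  # slow tempo -> long lag
    lag = min_lag + np.argmax(ac[min_lag : max_lag + 1])
    return 60.0 / (lag * hop_s)
```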
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
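Dynamic time warping, listed under Analysis above, aligns two feature sequences (e.g. chroma frames from a score rendition and a recording) that differ in timing. A minimal 1-D sketch with absolute-difference cost; real alignment systems use vector features and weighted step patterns:<br />

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping cost between two 1-D sequences,
    with unit steps (insertion, deletion, match)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A time-stretched copy of a sequence aligns at zero cost, which is exactly why DTW suits rubato and tempo drift.<br />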
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add MFCCs to the classification system and test with cross-validation<br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
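For reference during the lab, k-means clustering (Lloyd's algorithm) fits in a dozen lines of NumPy. This sketch uses a naive "first k points" initialization for determinism; scikit-learn's KMeans adds k-means++ initialization and restarts:<br />

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Lloyd's algorithm: alternately assign each point to its nearest
    centroid, then move each centroid to its cluster's mean."""
    centroids = X[:k].astype(float).copy()  # naive init: first k points
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):  # converged
            break
        centroids = new
    return labels, centroids
```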
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
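In the NMF formulation, a nonnegative magnitude spectrogram V is factored as V &asymp; WH, with columns of W acting as spectral templates and rows of H as their activations over time. A sketch of the Lee-Seung multiplicative updates for the Euclidean objective (illustrative only; no convergence safeguards):<br />

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~= W @ H (Euclidean
    objective). All entries stay nonnegative by construction."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps   # spectral templates (columns)
    H = rng.random((r, m)) + eps   # activations (rows)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```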
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
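As a pocket reference for the metrics above, the set-based definitions in Python (mir_eval provides task-specific variants for beats, chords, etc.):<br />

```python
def precision_recall_f(retrieved, relevant):
    """Set-based IR metrics: fraction of retrieved items that are
    relevant (precision), fraction of relevant items retrieved
    (recall), and their harmonic mean (F-measure)."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```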
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab: Algorithms for Pattern Recognition, Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, A. Klapuri and M. Davy (editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (editors)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000<br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* Artificial Intelligence: A Modern Approach, Second Edition, S. Russell and P. Norvig, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18243MIR workshop 20152015-07-13T21:21:14Z<p>Kiemyang: /* News */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== News ==<br />
<br />
'''Monday, July 13'''<br />
<br />
2:18 pm: dependencies:<br />
<br />
* (Anaconda)<br />
* git, pip, boto, boto3, matplotlib, ipython, numpy, scipy, scikit-learn, librosa, sox, python-dev, mir_eval, seaborn, requests<br />
<br />
11:11 am: Your post-it notes:<br />
<br />
* content-based analysis e.g. classifying violin playing style (vibrato, bowing)<br />
* MIR overview; music recommendation<br />
* feature extraction; dimensionality reduction; prediction<br />
* source separation techniques<br />
* chord estimation; "split" musical instruments; find beats in a song<br />
* audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)<br />
* acoustic fingerprinting<br />
* machine learning; turn analysis -> synth; music characterization<br />
* beat tracking; ways of identifying timbre<br />
* mood recognition<br />
* instrument separation; real-time processing<br />
* Marsyas?<br />
* speed of retrieval<br />
* what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas<br />
* machine learning techniques for more general audio problems i.e. language detection or identifying sound sources<br />
* networking and getting to know you all<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic "intelligent audio systems" from the ground up, leveraging existing MIR toolboxes, programming environments, and applications. Labs will include the creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add MFCCs to the classification system and test with cross-validation<br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab: Algorithms for Pattern Recognition, Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, A. Klapuri and M. Davy (editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (editors)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000<br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* Artificial Intelligence: A Modern Approach, Second Edition, S. Russell and P. Norvig, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18242MIR workshop 20152015-07-13T18:19:14Z<p>Kiemyang: </p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== News ==<br />
<br />
'''Monday, July 13'''<br />
<br />
11:11 am: Your post-it notes:<br />
<br />
* content-based analysis e.g. classifying violin playing style (vibrato, bowing)<br />
* MIR overview; music recommendation<br />
* feature extraction; dimensionality reduction; prediction<br />
* source separation techniques<br />
* chord estimation; "split" musical instruments; find beats in a song<br />
* audio-to-midi; signal/source/speaker separation; programming audio in Python (in general)<br />
* acoustic fingerprinting<br />
* machine learning; turn analysis -> synth; music characterization<br />
* beat tracking; ways of identifying timbre<br />
* mood recognition<br />
* instrument separation; real-time processing<br />
* Marsyas?<br />
* speed of retrieval<br />
* what's possible and what's not in music information retrieval; how to use MIR toolbox for fast realization of ideas<br />
* machine learning techniques for more general audio problems i.e. language detection or identifying sound sources<br />
* networking and getting to know you all<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
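The instance-based classification idea above can be sketched with scikit-learn's k-NN classifier. The feature values and class names below are made up purely for illustration:<br />

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical 2-D frame features (say, zero-crossing rate and spectral
# centroid in Hz) for a few labeled training examples.
X_train = np.array([[0.02, 500.0], [0.03, 600.0], [0.30, 3000.0], [0.28, 2800.0]])
y_train = np.array(['kick', 'kick', 'snare', 'snare'])

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# An unseen frame is labeled by majority vote among its 3 nearest neighbors.
print(knn.predict([[0.29, 2900.0]]))  # → ['snare']
```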
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
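As a rough sketch of windowed feature extraction, the following computes one time-domain feature (zero-crossing rate) and one frequency-domain feature (spectral centroid) per frame using only NumPy; the frame and hop sizes are arbitrary illustrative choices:<br />

```python
import numpy as np

def frame_signal(x, frame_length=1024, hop_length=512):
    """Slice a 1-D signal into overlapping frames (no padding)."""
    n_frames = 1 + (len(x) - frame_length) // hop_length
    idx = np.arange(frame_length)[None, :] + hop_length * np.arange(n_frames)[:, None]
    return x[idx]

def zero_crossing_rate(frames):
    """Time-domain feature: fraction of sign changes per frame."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def spectral_centroid(frames, sr):
    """Frequency-domain feature: magnitude-weighted mean frequency."""
    window = np.hanning(frames.shape[1])
    spectra = np.abs(np.fft.rfft(frames * window, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    return (spectra @ freqs) / (spectra.sum(axis=1) + 1e-10)

sr = 22050
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)          # 1 second of a 440 Hz sine
frames = frame_signal(x)
print(zero_crossing_rate(frames)[0])      # ≈ 0.04, i.e. 2·440 / 22050
print(spectral_centroid(frames, sr)[0])   # ≈ 440 Hz
```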
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
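A minimal illustration of spectral-domain onset detection via spectral flux, assuming nothing beyond NumPy; the fixed threshold rule is a deliberately simple stand-in for the adaptive detectors discussed in lecture:<br />

```python
import numpy as np

def spectral_flux_onsets(x, sr, frame_length=1024, hop_length=512, delta=0.5):
    """Detect onsets as peaks in the spectral flux: per-bin, half-wave-
    rectified magnitude increases from one frame to the next."""
    n_frames = 1 + (len(x) - frame_length) // hop_length
    idx = np.arange(frame_length)[None, :] + hop_length * np.arange(n_frames)[:, None]
    frames = x[idx] * np.hanning(frame_length)
    mag = np.abs(np.fft.rfft(frames, axis=1))
    # Keep only per-bin increases in magnitude: note attacks, not decays.
    flux = np.maximum(np.diff(mag, axis=0), 0.0).sum(axis=1)
    # Crude fixed threshold; real systems use an adaptive (e.g. median) one.
    threshold = flux.mean() + delta * flux.std()
    peaks = [i for i in range(1, len(flux) - 1)
             if flux[i] > threshold and flux[i] >= flux[i - 1] and flux[i] >= flux[i + 1]]
    # flux[i] measures the rise into frame i + 1.
    return (np.array(peaks) + 1) * hop_length / sr

# Synthetic test signal: two short bursts, at 0.5 s and 1.0 s, in silence.
sr = 22050
x = np.zeros(2 * sr)
for t0 in (0.5, 1.0):
    x[int(t0 * sr):int(t0 * sr) + 256] = 1.0
print(spectral_flux_onsets(x, sr))  # onsets near 0.5 and 1.0 seconds
```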
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
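The autocorrelation approach above can be sketched as follows; the onset-strength envelope here is synthetic, and the BPM search range is an arbitrary illustrative choice:<br />

```python
import numpy as np

def estimate_tempo(onset_env, fs, bpm_range=(60, 180)):
    """Estimate tempo by autocorrelating an onset-strength envelope and
    picking the lag with maximal correlation inside a plausible BPM range."""
    env = onset_env - onset_env.mean()
    ac = np.correlate(env, env, mode='full')[len(env) - 1:]
    lo = int(fs * 60.0 / bpm_range[1])   # shortest beat period, in samples
    hi = int(fs * 60.0 / bpm_range[0])   # longest beat period, in samples
    best_lag = lo + np.argmax(ac[lo:hi])
    return 60.0 * fs / best_lag

# Synthetic onset envelope sampled at 100 Hz with an impulse every
# 0.5 s, i.e. every 50 samples -> 120 BPM.
fs = 100.0
env = np.zeros(1000)
env[::50] = 1.0
print(estimate_tempo(env, fs))  # → 120.0
```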
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
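Dynamic time warping, the workhorse behind audio-score alignment and cover song detection, can be sketched in a few lines; the chroma-like one-hot vectors below are toy data:<br />

```python
import numpy as np

def dtw_cost(X, Y):
    """Classic DTW between two feature sequences (rows = frames),
    with Euclidean distance between frames."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(X[i - 1] - Y[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy chroma-like frames: one-hot pitch classes.
C = np.eye(12)
X = np.array([C[0], C[4], C[7]])        # C, E, G
Y = np.array([C[0], C[4], C[4], C[7]])  # same progression, E held longer
Z = np.array([C[1], C[5], C[8]])        # transposed up a semitone
print(dtw_cost(X, Y))  # → 0.0: DTW absorbs the time stretch
print(dtw_cost(X, Z))  # > 0: different pitch content
```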
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add MFCCs to the classification and test with cross-validation<br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
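A minimal unsupervised example with scikit-learn's k-means, clustering synthetic two-dimensional "frame features" into two groups without any labels:<br />

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated synthetic blobs standing in for, say,
# MFCC-derived features of frames from two different instruments.
rng = np.random.default_rng(0)
low = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2))
high = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(50, 2))
X = np.vstack([low, high])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
# All frames from each blob should share one cluster label.
print(len(set(labels[:50])), len(set(labels[50:])))  # → 1 1
```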
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
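Nonnegative matrix factorization can be sketched with scikit-learn; the "spectrogram" below is synthetic rank-2 data, so the learned templates and activations should reconstruct it closely:<br />

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
W_true = rng.random((20, 2))          # two spectral templates (freq x comp)
H_true = rng.random((2, 30))          # their activations over time
V = W_true @ H_true                   # toy mixture "spectrogram" (freq x time)

model = NMF(n_components=2, init='random', random_state=0, max_iter=1000)
W = model.fit_transform(V)            # learned templates, shape (20, 2)
H = model.components_                 # learned activations, shape (2, 30)

# V is exactly rank 2 and nonnegative, so V ≈ W @ H should hold closely.
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```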
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
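The basic retrieval metrics above are easy to compute by hand; the document IDs below are arbitrary toy data:<br />

```python
def precision_recall_f(retrieved, relevant):
    """Precision, recall, and F-measure for one query."""
    tp = len(set(retrieved) & set(relevant))          # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# A system returns 4 documents; 3 of the 5 relevant ones are among them.
p, r, f = precision_recall_f(retrieved=[1, 2, 3, 4], relevant=[2, 3, 4, 8, 9])
print(p, r, f)  # p = 0.75, r = 0.6, f ≈ 0.667
```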
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== Additional References == <br />
<br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000<br />
<br />
Background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html Univ. of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18240MIR workshop 20152015-07-10T20:51:33Z<p>Kiemyang: /* References for additional info */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18239MIR workshop 20152015-07-10T20:50:09Z<p>Kiemyang: /* MATLAB Utility Scripts */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add in MFCCs to classification and test w Cross validation <br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
== Software Libraries ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab, Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, A. Klapuri and M. Davy (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (Editors)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* See the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics), Christopher M. Bishop<br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R. Duda, P. Hart, and D. Stork, Wiley Interscience, 2001.<br />
* Artificial Intelligence: A Modern Approach, Second Edition, S. Russell and P. Norvig, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts]<br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
* Overview of Weka & the Wekinator<br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
* A brief history of MIR<br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today; this will be used in the lab on Friday. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add in MFCCs to classification and test w Cross validation <br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[ https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
== software, libraries, examples ==<br />
<br />
* [https://www.python.org/ Python]<br />
* [http://www.numpy.org/ NumPy]<br />
* [http://www.scipy.org/ SciPy]<br />
* [http://ipython.org/ IPython]<br />
* [http://scikit-learn.org/stable/ scikit-learn]<br />
* [http://bmcfee.github.io/librosa/ librosa]<br />
* [http://craffel.github.io/mir_eval/ mir_eval]<br />
* [http://essentia.upf.edu/ Essentia]<br />
* [http://www.vamp-plugins.org/vampy.html VamPy]<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (multiple DVDs; available in the Stanford Music Library)<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today. This will be used in Friday's lab. Copy the data from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18236MIR workshop 20152015-07-10T20:44:19Z<p>Kiemyang: /* Day 5: Deep Belief Networks and Wavelets */</p>
Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18235MIR workshop 20152015-07-10T20:43:48Z<p>Kiemyang: /* Day 4: Music Information Retrieval in Polyphonic Mixtures */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
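The instance-based (k-NN) classifier mentioned above is simple enough to sketch in a few lines of NumPy. The 2-D feature vectors below are invented toy data for illustration, not workshop material:<br />

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)      # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]                  # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                 # majority label wins

# Toy 2-D "feature vectors": two well-separated classes
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([0.15, 0.15])))  # query near class 0
print(knn_predict(X, y, np.array([1.05, 0.95])))  # query near class 1
```

There is no training step beyond storing the data, which is what "instance-based" means.<br />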
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
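As a rough illustration of windowed feature extraction, the sketch below frames a synthetic signal and computes one time-domain feature (zero-crossing rate) and one frequency-domain feature (spectral centroid) per frame; the frame and hop lengths are arbitrary choices, not the lab's settings:<br />

```python
import numpy as np

def frame_signal(x, frame_length=2048, hop_length=512):
    """Slice a signal into overlapping frames (one row per frame)."""
    n_frames = 1 + (len(x) - frame_length) // hop_length
    idx = np.arange(frame_length) + hop_length * np.arange(n_frames)[:, None]
    return x[idx]

def zero_crossing_rate(frames):
    # Fraction of adjacent-sample pairs whose sign changes, per frame
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

def spectral_centroid(frames, sr):
    # Magnitude-weighted mean frequency of each windowed frame
    window = np.hanning(frames.shape[1])
    mags = np.abs(np.fft.rfft(frames * window, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    return (mags @ freqs) / (np.sum(mags, axis=1) + 1e-10)

sr = 22050
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 220 * t)        # low tone: low centroid, low ZCR
high = np.sin(2 * np.pi * 4400 * t)      # high tone: high centroid, high ZCR
for name, sig in [("220 Hz", low), ("4400 Hz", high)]:
    frames = frame_signal(sig)
    print(name, zero_crossing_rate(frames).mean(), spectral_centroid(frames, sr).mean())
```
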
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
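The MFCC pipeline behind those examples (power spectrum, triangular mel filterbank, log, DCT) can be sketched from scratch in NumPy. This is a simplified single-frame version with arbitrary parameter choices, not the implementation used in the labs:<br />

```python
import numpy as np

def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)

def mfcc_frame(frame, sr, n_mels=26, n_mfcc=13):
    """MFCCs of one frame: power spectrum -> mel filterbank -> log -> DCT."""
    mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    # Triangular filters spaced evenly on the mel scale
    mel_pts = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2))
    fbank = np.zeros((n_mels, len(freqs)))
    for i in range(n_mels):
        lo, ctr, hi = mel_pts[i], mel_pts[i + 1], mel_pts[i + 2]
        fbank[i] = np.clip(np.minimum((freqs - lo) / (ctr - lo),
                                      (hi - freqs) / (hi - ctr)), 0, None)
    logmel = np.log(fbank @ mags + 1e-10)
    # Type-II DCT decorrelates the log-mel energies; keep the first n_mfcc coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_mels)))
    return dct @ logmel

sr = 22050
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 440 * t)
print(mfcc_frame(frame, sr)[:4])
```
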
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
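A minimal spectral-domain onset detector (half-wave-rectified spectral flux) might look like the following; the silence-then-noise test signal is synthetic:<br />

```python
import numpy as np

def spectral_flux(x, frame_length=1024, hop_length=512):
    """Half-wave-rectified spectral flux: sum of positive magnitude increases per frame."""
    n_frames = 1 + (len(x) - frame_length) // hop_length
    idx = np.arange(frame_length) + hop_length * np.arange(n_frames)[:, None]
    mags = np.abs(np.fft.rfft(x[idx] * np.hanning(frame_length), axis=1))
    diff = np.diff(mags, axis=0)
    return np.sum(np.maximum(diff, 0), axis=1)   # keep rises only; energy decays are ignored

# Silence, then a burst of noise: the onset should produce a clear peak in the flux
sr = 22050
x = np.zeros(sr)
x[sr // 2:] = 0.5 * np.random.RandomState(0).randn(sr - sr // 2)
flux = spectral_flux(x)
onset_frame = int(np.argmax(flux))
print(onset_frame * 512 / sr)   # onset time in seconds, roughly 0.5
```

Peak picking on this detection function (thresholding, local maxima) is what turns it into a list of onset times.<br />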
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
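The autocorrelation approach above can be sketched on a synthetic onset envelope; the frame rate and BPM search range here are assumed values, not the lab's:<br />

```python
import numpy as np

def estimate_tempo(onset_env, fps, bpm_range=(60, 180)):
    """Pick the tempo whose beat period maximizes the autocorrelation of the onset envelope."""
    env = onset_env - onset_env.mean()
    ac = np.correlate(env, env, mode='full')[len(env) - 1:]   # autocorrelation, nonnegative lags
    lags = np.arange(len(ac))
    bpm = 60.0 * fps / np.maximum(lags, 1)                    # convert each lag to a tempo
    valid = (bpm >= bpm_range[0]) & (bpm <= bpm_range[1])
    return bpm[valid][np.argmax(ac[valid])]

# Synthetic onset envelope: an impulse every 0.5 s (120 BPM) at 100 frames per second
fps = 100
env = np.zeros(1000)
env[::50] = 1.0
print(estimate_tempo(env, fps))   # 120.0
```
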
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
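As a rough sketch of monophonic pitch detection, autocorrelation finds the lag corresponding to one period of the waveform; the fmin/fmax bounds below are assumptions:<br />

```python
import numpy as np

def detect_pitch(x, sr, fmin=80.0, fmax=1000.0):
    """Monophonic pitch via autocorrelation: the strongest peak lag equals one period."""
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]   # nonnegative lags only
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)   # restrict to plausible periods
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / best_lag

sr = 22050
t = np.arange(2048) / sr
x = np.sin(2 * np.pi * 440 * t)
print(detect_pitch(x, sr))   # close to 440 Hz
```

The estimate is quantized to integer lags; real detectors interpolate around the peak, and polyphonic pitch detection requires much more machinery (e.g. the NMF methods of Day 4).<br />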
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add MFCCs to the classification and test with cross-validation <br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
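A bare-bones k-means (Lloyd's algorithm) can be sketched in NumPy; this toy version uses farthest-point initialization for stability on the fabricated data below. In practice a library implementation such as scikit-learn (one of the workshop dependencies) is preferable:<br />

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Lloyd's algorithm with farthest-point initialization."""
    rng = np.random.RandomState(seed)
    centroids = [X[rng.randint(len(X))]]
    for _ in range(k - 1):                      # pick each next centroid far from the rest
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(n_iter):                     # alternate assignment and update steps
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two fabricated, well-separated 2-D clusters
rng = np.random.RandomState(1)
X = np.vstack([rng.randn(30, 2) + [0, 0], rng.randn(30, 2) + [6, 6]])
labels, centroids = kmeans(X, 2)
print(centroids)
```
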
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
'''Lecture'''<br />
<br />
Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
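Nonnegative matrix factorization with the classic multiplicative updates fits in a few lines; the tiny "spectrogram" below is fabricated for illustration, with two spectral templates active at different times:<br />

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0):
    """Multiplicative updates (Lee & Seung) minimizing ||V - W H||_F."""
    rng = np.random.RandomState(seed)
    n, m = V.shape
    W, H = rng.rand(n, k), rng.rand(k, m)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)   # update activations
        W *= (V @ H.T) / (W @ (H @ H.T) + 1e-10) # update spectral templates
    return W, H

templates = np.array([[1.0, 0.0], [0.8, 0.1], [0.0, 1.0], [0.1, 0.9]])   # 4 freq bins x 2 sources
activations = np.array([[1, 1, 0, 0, 1], [0, 0, 1, 1, 1]], dtype=float)  # 2 sources x 5 frames
V = templates @ activations
W, H = nmf(V, 2)
print(np.linalg.norm(V - W @ H))   # reconstruction error, should be small
```

Because the updates only ever multiply by nonnegative ratios, W and H stay nonnegative throughout, which is what makes the factors interpretable as spectra and activations.<br />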
<br />
Evaluation Metrics for Information Retrieval<br />
<br />
'''Lab'''<br />
<br />
[https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR evaluation metrics (precision, recall, F-measure, area under the ROC curve, ...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
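The basic IR metrics reduce to set arithmetic; the retrieved/relevant sets below are hypothetical:<br />

```python
# Hypothetical retrieval result: the system returned 5 documents; 10 are truly relevant
retrieved = {"a", "b", "c", "d", "e"}
relevant = {"a", "b", "c", "f", "g", "h", "i", "j", "k", "l"}

tp = len(retrieved & relevant)                  # relevant items actually returned
precision = tp / len(retrieved)                 # fraction of returned items that are relevant
recall = tp / len(relevant)                     # fraction of relevant items that were returned
f_measure = 2 * precision * recall / (precision + recall)   # harmonic mean of the two
print(precision, recall, f_measure)             # precision 0.6, recall 0.3, F about 0.4
```
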
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
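The simplest wavelet, the Haar wavelet, illustrates the multi-resolution idea behind the lecture: each level splits a signal into a smoothed half-rate approximation and the detail needed to reconstruct it. A one-level sketch in NumPy (toy input):<br />

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: scaled pairwise averages and differences."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse, half-rate version of the signal
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # fast changes lost by the averaging
    return approx, detail

x = np.array([4.0, 4.0, 2.0, 2.0, 1.0, 3.0, 5.0, 7.0])
approx, detail = haar_dwt(x)
print(approx)
print(detail)
```

Applying the same split recursively to the approximation gives the full multi-resolution decomposition; the transform is orthonormal, so energy is preserved at every level.<br />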
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks Made Easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* See also the references below<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, by Ben Gold and Nelson Morgan, Wiley, 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in the Stanford Music Library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html Univ. of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done # macOS afconvert: decode each MP3 to 16-bit, 44.1 kHz AIFF<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today; this will be used in Friday's lab. Copy the dataset from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18234MIR workshop 20152015-07-10T20:36:02Z<p>Kiemyang: /* Day 3: Machine Learning, Clustering and Classification */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic "intelligent audio systems" from the ground up, leveraging existing MIR toolboxes, programming environments, and applications. Labs will include the creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
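As a taste of instance-based classification, k-NN can be sketched in a few lines of numpy. This is a toy illustration with made-up 2-D feature vectors, not the lab code:

```python
import numpy as np

def knn_classify(X_train, y_train, x, k=3):
    """Label x by majority vote among its k nearest training examples."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority label wins

# Hypothetical 2-D feature vectors for two instrument classes
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])
print(knn_classify(X, y, np.array([0.15, 0.15])))   # nearest neighbors are class 0
```

In the labs, the feature vectors would come from frame-wise audio features (e.g. MFCCs), and scikit-learn's KNeighborsClassifier plays the same role.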
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
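Windowed feature extraction boils down to slicing the signal into short overlapping frames and computing one number (or vector) per frame. A minimal numpy sketch, using a synthetic sine tone rather than real audio:

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Slice x into overlapping frames, one frame per row."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def rms(frames):
    """Time-domain feature: root-mean-square energy per frame."""
    return np.sqrt(np.mean(frames ** 2, axis=1))

def zero_crossing_rate(frames):
    """Time-domain feature: fraction of adjacent samples that change sign."""
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

sr = 8000
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 440 * t)   # one second of a 440 Hz tone
frames = frame_signal(x)
print(rms(frames)[0], zero_crossing_rate(frames)[0])
```

Frequency-domain features follow the same pattern, with a windowing function and an FFT applied to each frame in between.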
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
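The autocorrelation approach can be illustrated on a synthetic onset-strength envelope. Real systems compute the envelope from audio via an onset detection function; the 120 BPM impulse train below is made up for the sketch:

```python
import numpy as np

def tempo_from_envelope(env, frame_rate):
    """Estimate tempo (BPM) as the strongest autocorrelation lag in the 30-240 BPM range."""
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]   # non-negative lags only
    min_lag = int(frame_rate * 60 / 240)   # fastest tempo considered
    max_lag = int(frame_rate * 60 / 30)    # slowest tempo considered
    lag = min_lag + np.argmax(ac[min_lag:max_lag])
    return 60.0 * frame_rate / lag

# Synthetic onset-strength envelope: one impulse every 0.5 s at 100 frames/s (i.e. 120 BPM)
frame_rate = 100
env = np.zeros(10 * frame_rate)
env[::frame_rate // 2] = 1.0
print(tempo_from_envelope(env, frame_rate))   # 120.0
```

The strongest autocorrelation lag inside the plausible tempo range is taken as the beat period; librosa's beat module follows a similar (more robust) scheme.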
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
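Dynamic time warping, the workhorse behind audio-score alignment and query-by-humming, is just a small dynamic program. A minimal sketch on 1-D feature sequences (real systems align chroma or MFCC vectors instead of scalars):

```python
import numpy as np

def dtw_cost(a, b):
    """Minimum alignment cost between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])               # local distance
            D[i, j] = d + min(D[i - 1, j],             # insertion
                              D[i, j - 1],             # deletion
                              D[i - 1, j - 1])         # match
    return D[n, m]

# The same "melody" played at two speeds still aligns with zero cost
print(dtw_cost([1, 2, 3], [1, 1, 2, 2, 3, 3]))   # 0.0
print(dtw_cost([1, 2, 3], [1, 3, 3]))            # 1.0
```

A cost of 0 between [1, 2, 3] and [1, 1, 2, 2, 3, 3] shows why DTW tolerates tempo differences that a plain sample-by-sample distance would not.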
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add MFCCs to the classification and test with cross-validation <br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
<br />
'''Lecture'''<br />
<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/kmeans_instrument_classification.html MFCC, K-Means Clustering]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means (2012)]<br />
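The heart of the lab, clustering feature vectors with k-means, can be sketched with plain numpy. Toy data stands in for MFCC frames here; the notebook itself uses scikit-learn:

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
        # move each centroid to the mean of its points (keep it if the cluster is empty)
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

# Two well-separated toy "MFCC" clusters (hypothetical data)
X = np.vstack([np.zeros((5, 2)), np.ones((5, 2)) * 10])
labels, _ = kmeans(X, k=2)
print(labels)
```

Each cluster would ideally correspond to one instrument; the lab then listens to the frames assigned to each cluster to check.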
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
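NMF factorizes a magnitude spectrogram V into nonnegative spectral templates W and activations H. A sketch of the classic Lee-Seung multiplicative updates on a made-up toy "spectrogram":

```python
import numpy as np

def nmf(V, r, n_iter=300, seed=0):
    """Lee-Seung multiplicative updates minimizing the Frobenius norm of V - W H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3   # spectral templates (columns)
    H = rng.random((r, m)) + 1e-3   # activations over time (rows)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "spectrogram": two spectral templates active at different times
V = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 2, 2]], dtype=float)
W, H = nmf(V, r=2)
print(np.linalg.norm(V - W @ H))   # small reconstruction error
```

Each column of W acts like a note or instrument spectrum and each row of H like its activation in time, which is what makes NMF useful for transcription and source separation.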
<br />
Guest Lecture 7: Andreas Ehmann, MIREX <br><br />
<br />
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]<br />
<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== Software, Libraries, Examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and Lectures == <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley, 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in the Stanford Music Library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html Univ. of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done # macOS afconvert: decode each MP3 to 16-bit, 44.1 kHz AIFF<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today; this will be used in Friday's lab. Copy the dataset from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18233MIR workshop 20152015-07-10T20:32:50Z<p>Kiemyang: /* Day 2: Beat, Rhythm, Pitch and Chroma Analysis */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic "intelligent audio systems" from the ground up, leveraging existing MIR toolboxes, programming environments, and applications. Labs will include the creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
Introductions<br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
<br />
MFCCs sonified<br />
* Original track ("Chewing Gum") [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
[http://musicinformationretrieval.com/feature_sonification.html Understanding Audio Features Through Sonification]<br />
<br />
* Background for students needing a refresher: [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
* ''Reminder'': Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf List of beat tracking references]<br />
<br />
Onset Detection<br />
* Time-domain differences<br />
* Spectral-domain differences<br />
* Perceptual data-warping<br />
* Adaptive onset detection<br />
<br />
Beat and Tempo<br />
* IOIs and Beat Regularity, Rubato<br />
* Tatum, Tactus and Meter levels<br />
* Tempo estimation<br />
* Onset-detection vs Beat-detection<br />
* The Onset Detection Function<br />
* Beat Histograms<br />
* Fluctuation Patterns<br />
* Joint estimation of downbeat and chord change<br />
<br />
Approaches to Beat Tracking and Meter Estimation<br />
* Autocorrelation<br />
* Beat Spectrum measures<br />
* Multi-resolution (Wavelet)<br />
<br />
Pitch and Chroma<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab''' <br />
<br />
Part 1: Tempo Extraction<br />
<br />
Part 2: Add MFCCs to the classification and test with cross-validation <br />
<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Bonus Slides: Temporal & Harmony Analysis <br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Demo: iZotope Discover (Sound Similarity Search, Jay) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture: Stephen Pope (SndsLike, BirdGenie)<br />
[https://ccrma.stanford.edu/workshops/mir2014/MAT_MIR4-update.pdf MAT_MIR4-update slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/BirdsEar.pdf BirdGenie Slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/SndsLike.pdf SndsLike Slides]<br />
<br />
Lecture 5: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_ML.pdf Lecture 5 Slides]<br />
<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Guest Lecture 7: Andreas Ehmann, MIREX <br><br />
<br />
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]<br />
<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== Software, Libraries, Examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and Lectures ==<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in the Stanford Music Library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
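As a rough Python illustration of the chroma-based key estimation covered in this lab (a sketch only, not the lab's Matlab code; the example chroma vector is synthetic), one can correlate a 12-bin chroma vector against rotated Krumhansl-Kessler key profiles:<br />

```python
import numpy as np

# Krumhansl-Kessler major-key profile (one value per pitch class, C first)
MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
PITCH_CLASSES = ['C', 'C#', 'D', 'D#', 'E', 'F',
                 'F#', 'G', 'G#', 'A', 'A#', 'B']

def estimate_major_key(chroma):
    """Pick the major key whose rotated profile best correlates
    with the given 12-bin chroma vector."""
    scores = [np.corrcoef(chroma, np.roll(MAJOR_PROFILE, k))[0, 1]
              for k in range(12)]
    return PITCH_CLASSES[int(np.argmax(scores))]

# Synthetic chroma with energy on C, E, and G (a C major triad)
chroma = np.zeros(12)
chroma[[0, 4, 7]] = 1.0
key = estimate_major_key(chroma)
```

In practice the chroma vector would come from averaging a chromagram over time, as in the lab materials; minor keys need a second profile handled the same way.<br />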
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
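For readers without Matlab, the gist of the SVM lab can be sketched with scikit-learn (one of the workshop's listed Python dependencies). The two-class "instrument" data below is synthetic, made up purely for illustration:<br />

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)

# Two synthetic feature clusters standing in for two instrument classes
class_a = rng.randn(50, 2) + [0.0, 0.0]
class_b = rng.randn(50, 2) + [4.0, 4.0]
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel='rbf', C=1.0)   # radial-basis-function kernel
clf.fit(X, y)
accuracy = clf.score(X, y)
```

With real audio features you would train on MFCC vectors rather than random blobs, and report cross-validated rather than training accuracy.<br />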
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
 for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done
* Extract CAL500 per-song features to .mat or .csv using features from today. These will be used in Friday's lab. Copy the data from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>
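One possible shape for that per-song feature export, sketched in Python. Hedged heavily: the 13-column feature matrix below is random stand-in data (real rows would be, e.g., per-frame MFCCs), and the mean/std summary and `cal500_features.csv` filename are just common choices, not the lab's prescribed format:<br />

```python
import csv
import numpy as np

def summarize_features(feature_frames):
    """Collapse a (num_frames x num_features) matrix into one
    per-song row: the mean and std of each feature column."""
    feature_frames = np.asarray(feature_frames, dtype=float)
    return np.concatenate([feature_frames.mean(axis=0),
                           feature_frames.std(axis=0)])

rng = np.random.RandomState(0)
# Random stand-in for e.g. 13 MFCCs over 100 frames of one song
frames = rng.randn(100, 13)
row = summarize_features(frames)

# One CSV row per song: an identifier plus the summary statistics
with open('cal500_features.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['song_id'] + list(row))
```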
Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18231MIR workshop 20152015-07-10T19:25:03Z<p>Kiemyang: /* Day 1: Introduction to MIR, Signal Analysis and Feature Extraction */</p>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule ==<br />
<br />
Instructional material can be found at [http://musicinformationretrieval.com musicinformationretrieval.com] (read only) or on [https://github.com/stevetjoa/stanford-mir GitHub] (full source).<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction ===<br />
<br />
'''Lecture'''<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Nette, Fernando). <br />
* Introduction to MIR (What is MIR? Why MIR? Commercial applications) <br />
* Basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Classification: Instance-based classifiers (k-NN) <br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
* Time-domain features<br />
* Frequency-domain features<br />
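Two of the simplest features from these families can be written in a few lines of NumPy: zero-crossing rate (time domain) and spectral centroid (frequency domain). This is a generic sketch with synthetic test tones, not code from the workshop labs:<br />

```python
import numpy as np

def zero_crossing_rate(frame):
    """Time-domain feature: fraction of adjacent samples that change sign."""
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

def spectral_centroid(frame, sr):
    """Frequency-domain feature: magnitude-weighted mean frequency (Hz)."""
    mags = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * mags) / (np.sum(mags) + 1e-12)

# Synthetic test tones: both features should rank them the same way
sr = 8000
t = np.arange(1024) / sr
low = np.sin(2 * np.pi * 200 * t)    # 200 Hz tone
high = np.sin(2 * np.pi * 2000 * t)  # 2 kHz tone
```

Brighter, noisier sounds push both features up, which is why they are useful for coarse timbre discrimination.<br />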
<br />
* MFCCs sonified<br />
* Original track ("Chewing Gum"): [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694]<br />
* MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/]<br />
<br />
<br />
'''Lab'''<br />
<br />
* [http://nbviewer.ipython.org/github/stevetjoa/stanford-mir/blob/master/Table_of_Contents.ipynb Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2, Part 1: Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
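To make the spectral-difference family of onset detectors concrete, here is a minimal half-wave-rectified spectral flux function in NumPy. It is a toy sketch of the general technique, not any particular method from the slides; the test signal (silence followed by a sine burst) is made up:<br />

```python
import numpy as np

def spectral_flux(x, frame_size=512, hop=256):
    """Half-wave-rectified spectral flux: a simple onset detection
    function based on frame-to-frame spectral magnitude increases."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(x) - frame_size) // hop
    mags = np.array([np.abs(np.fft.rfft(window * x[i*hop : i*hop + frame_size]))
                     for i in range(n_frames)])
    diff = np.diff(mags, axis=0)
    return np.sum(np.maximum(diff, 0.0), axis=1)  # rectify, sum over bins

# Synthetic signal: half a second of silence, then a sudden 440 Hz burst
sr = 8000
x = np.zeros(sr)
t = np.arange(sr // 2) / sr
x[sr // 2:] = 0.5 * np.sin(2 * np.pi * 440 * t)
odf = spectral_flux(x)
onset_frame = int(np.argmax(odf))  # peak of the ODF marks the onset
```

Peak-picking this onset detection function (with thresholding and smoothing) is the usual next step before tempo estimation.<br />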
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
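Dynamic time warping, the alignment tool behind several of these applications, fits in a short dynamic-programming routine. A textbook sketch on 1-D sequences (real systems align chroma or MFCC frames; the example sequences are made up):<br />

```python
import numpy as np

def dtw_cost(a, b):
    """Classic dynamic time warping: minimum alignment cost between
    two 1-D feature sequences, with step set {down, right, diagonal}."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = [0, 0, 1, 2, 1, 0]
b = [0, 1, 2, 2, 1, 0]   # same contour, slightly time-stretched
c = [5, 5, 5, 5, 5, 5]   # unrelated sequence
```

The low cost between `a` and `b` despite their timing differences is exactly what makes DTW useful for audio-score alignment and query-by-humming.<br />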
<br />
CCRMA Tour<br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Demo: iZotope Discover (Sound Similarity Search, Jay) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture: Stephen Pope (SndsLike, BirdGenie)<br />
[https://ccrma.stanford.edu/workshops/mir2014/MAT_MIR4-update.pdf MAT_MIR4-update slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/BirdsEar.pdf BirdGenie Slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/SndsLike.pdf SndsLike Slides]<br />
<br />
Lecture 5: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_ML.pdf Lecture 5 Slides]<br />
<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
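The clustering half of the lab is Lloyd's k-means algorithm, which alternates between assigning each feature vector to its nearest centroid and moving each centroid to the mean of its members. A pure-Python sketch on toy 2-D points; in the lab itself you would run scikit-learn's KMeans on real MFCC frames:<br />

```python
# Lloyd's algorithm for k-means clustering.
# The 2-D points are toy stand-ins for per-frame MFCC vectors.

def kmeans(points, k, iters=20):
    """Cluster points into k groups; returns (centroids, labels)."""
    centroids = points[:k]  # naive init: first k points
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared distance.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return centroids, labels

points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),   # cluster near the origin
          (5.0, 5.1), (5.2, 5.0), (5.1, 4.9)]   # cluster near (5, 5)
centroids, labels = kmeans(points, k=2)
print(labels)  # points 0-2 share one label, points 3-5 the other
```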
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
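Nonnegative matrix factorization ties the topics above together: a magnitude spectrogram V (frequency x time) is approximated as the product W H, where the columns of W are spectral templates (e.g. individual notes or drums) and the rows of H are their activations over time. A toy pure-Python sketch of the Lee-Seung multiplicative updates; the deterministic initialization and the tiny matrix are illustrative, and real code would factor an STFT magnitude matrix from a randomized start:<br />

```python
# Nonnegative matrix factorization via Lee-Seung multiplicative updates.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, k, iters=200, eps=1e-9):
    """Factor V (m x n) into nonnegative W (m x k) and H (k x n)."""
    m, n = len(V), len(V[0])
    # Deterministic positive init (real code would randomize).
    W = [[1.0 + (i + j) % 3 for j in range(k)] for i in range(m)]
    H = [[1.0 + (i * j) % 2 for j in range(n)] for i in range(k)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(k)]
        # W <- W * (V H^T) / (W H H^T)
        VHt = matmul(V, transpose(H))
        WHHt = matmul(matmul(W, H), transpose(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

# Rank-2 "spectrogram": two templates switching on and off over time.
V = [[2.0, 0.0, 2.0, 0.0],
     [2.0, 0.0, 2.0, 0.0],
     [0.0, 3.0, 0.0, 3.0]]
W, H = nmf(V, k=2)
approx = matmul(W, H)
err = sum((V[i][j] - approx[i][j]) ** 2 for i in range(3) for j in range(4))
print(round(err, 6))  # residual; small here since V has an exact rank-2 factorization
```

Because every update multiplies by a nonnegative ratio, W and H stay nonnegative throughout, which is what makes the learned templates interpretable as spectra.<br />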
<br />
Guest Lecture 7: Andreas Ehmann, MIREX <br><br />
<br />
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]<br />
<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
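The referenced metrics reduce to simple set arithmetic over the relevant and retrieved item sets; a small sketch:<br />

```python
# Precision, recall, and F-measure for a retrieval result.

def precision_recall_f(relevant, retrieved):
    """Compute (precision, recall, F1) from collections of item IDs."""
    relevant, retrieved = set(relevant), set(retrieved)
    hits = len(relevant & retrieved)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 8 of the 10 retrieved items are relevant; 8 of 16 relevant items found.
p, r, f = precision_recall_f(relevant=range(16), retrieved=range(8, 18))
print(p, r, f)  # precision 0.8, recall 0.5, F1 ~ 0.615
```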
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== Software, Libraries, and Examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* See also the references listed below.<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental Papers and Information for the Lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and Lectures == <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
[http://wiki.laptop.org/go/Sound_samples OLPC Sound Sample Archive] (8.5 GB)<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
[http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/ Kyogu Lee's KAIST summer 2008 special lecture]<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
<code>for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done</code><br />
* Extract CAL500 per-song features to .mat or .csv using the features from today; this will be used in Friday's lab. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (16 GB).</div>
<hr />
<div>''' Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval '''<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Nette, Carr, Fernando). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
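The instance-based classification and cross-validation ideas above fit in a few lines of Python. This toy sketch (made-up 2-D feature vectors and k=3, not the lab data) classifies by majority vote among nearest neighbors and estimates accuracy with leave-one-out cross-validation:<br />

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to each point
    votes = y_train[np.argsort(dists)[:k]]        # labels of the k nearest
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy 2-D "feature vectors" for two instrument classes (0 and 1).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.25],
              [0.9, 0.8], [0.8, 0.9], [0.85, 0.75]])
y = np.array([0, 0, 0, 1, 1, 1])

# Leave-one-out cross-validation: hold out each point, train on the rest.
correct = sum(knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
              for i in range(len(X)))
print(correct, "of", len(X), "correct")  # 6 of 6 correct
```

In the labs the toy vectors are replaced by extracted audio features; the hold-one-out loop is the same idea as k-fold cross-validation with one fold per example.<br />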
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (Spectral statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing, Frequency-dependent spatial separation, LCR sources<br />
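As a minimal illustration of windowed time-domain feature extraction, the sketch below computes per-frame RMS and zero-crossing rate over a synthetic sine; the frame and hop sizes are arbitrary choices for illustration, not values prescribed by the lecture:<br />

```python
import numpy as np

def frame_features(x, frame_len=1024, hop=512):
    """Per-frame RMS and zero-crossing rate (a minimal sketch)."""
    feats = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))                  # energy measure
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)  # sign changes per sample
        feats.append((rms, zcr))
    return np.array(feats)

sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)   # one second of a 440 Hz sine
feats = frame_features(x)
print(feats[0])  # RMS near 1/sqrt(2) ~ 0.707; ZCR near 2 * 440 / sr = 0.11
```

The same analysis loop extends to spectral features by taking an FFT of each windowed frame before computing the per-frame measure.<br />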
<br />
MFCCs Sonified<br><br />
Original track ("Chewing Gum"): [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694] <br><br />
MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/] <br><br />
<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://nbviewer.ipython.org/github/stevetjoa/stanford-mir/blob/master/Table_of_Contents.ipynb Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
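The onset-detection-function and autocorrelation ideas above can be sketched end to end: half-wave-rectified spectral flux as the ODF, then the lag of the autocorrelation peak as the inter-onset interval. The click track, sample rate, and frame sizes below are contrived so the result comes out exact:<br />

```python
import numpy as np

def spectral_flux_odf(x, frame_len=1024, hop=512):
    """Onset detection function: half-wave-rectified spectral flux."""
    window = np.hanning(frame_len)
    frames = np.array([x[i:i + frame_len] * window
                       for i in range(0, len(x) - frame_len + 1, hop)])
    mags = np.abs(np.fft.rfft(frames, axis=1))
    flux = np.diff(mags, axis=0)                   # frame-to-frame spectral change
    return np.sum(np.maximum(flux, 0.0), axis=1)   # keep rises in energy only

# Synthetic click track: one impulse every 0.5 s at an 8192 Hz rate (120 BPM).
sr = 8192
x = np.zeros(8 * sr)
x[::sr // 2] = 1.0
odf = spectral_flux_odf(x)

# Tempo: the autocorrelation peak lag gives the inter-onset interval in hops.
ac = np.correlate(odf, odf, mode="full")[len(odf) - 1:]
lag = np.argmax(ac[1:]) + 1                        # skip the trivial lag-0 peak
bpm = 60.0 / (lag * 512 / sr)
print(round(bpm))  # 120
```

On real audio the ODF is noisier, so practical trackers add adaptive thresholding and restrict the autocorrelation search to a plausible tempo range.<br />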
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
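Dynamic time warping, the workhorse behind audio-to-score alignment and cover-song detection, reduces to a small dynamic program. A toy sketch over 1-D pitch sequences (hypothetical MIDI note numbers):<br />

```python
import numpy as np

def dtw_cost(a, b):
    """Total cost of the best monotonic alignment between two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])        # local match cost
            D[i, j] = d + min(D[i - 1, j],      # insertion
                              D[i, j - 1],      # deletion
                              D[i - 1, j - 1])  # match
    return D[n, m]

melody = [60, 62, 64, 65]
stretched = [60, 60, 62, 64, 64, 65]   # same contour, notes held longer
different = [60, 59, 57, 55]           # descending line
print(dtw_cost(melody, stretched))     # 0.0: warping absorbs the time stretch
print(dtw_cost(melody, different))     # > 0: genuinely different contour
```

In practice the scalar pitches become chroma vectors and the local cost a vector distance, but the recurrence is unchanged.<br />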
<br />
CCRMA Tour<br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classifier and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See the [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] in the MIR Matlab codebase (runs in Octave or Matlab).<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Demo: iZotope Discover (Sound Similarity Search, Jay) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture: Stephen Pope (SndsLike, BirdGenie)<br />
[https://ccrma.stanford.edu/workshops/mir2014/MAT_MIR4-update.pdf MAT_MIR4-update slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/BirdsEar.pdf BirdGenie Slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/SndsLike.pdf SndsLike Slides]<br />
<br />
Lecture 5: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_ML.pdf Lecture 5 Slides]<br />
<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
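A bare-bones version of the k-means procedure used in this lab (numpy only; the two Gaussian blobs stand in for per-track mean-MFCC vectors, and initializing from two data points is a simplification chosen for determinism):<br />

```python
import numpy as np

def kmeans(X, init_centroids, n_iter=20):
    """Plain k-means: alternate nearest-centroid assignment and mean update."""
    centroids = np.asarray(init_centroids, dtype=float)
    k = len(centroids)
    for _ in range(n_iter):
        # Assign each point to its nearest centroid ...
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        # ... then move each centroid to the mean of its assigned points.
        centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels, centroids

rng = np.random.default_rng(0)
# Two synthetic "timbre" blobs standing in for per-track feature vectors.
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels, centroids = kmeans(X, X[[0, 20]])
print(sorted(np.bincount(labels)))  # the two blobs separate cleanly: [20, 20]
```

Production code would use a smarter initialization (e.g. k-means++) and guard against clusters that lose all their points.<br />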
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
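The NMF idea (factor a nonnegative magnitude spectrogram V into spectral templates W and time activations H) can be sketched with the classic Lee-Seung multiplicative updates for the Euclidean cost; the tiny "spectrogram" below is contrived for illustration:<br />

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """V ~ W @ H with nonnegative factors, via multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 1e-3
    H = rng.random((r, V.shape[1])) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update templates
    return W, H

# Toy "spectrogram": two spectral templates active at different times.
templates = np.array([[1.0, 0.0], [0.8, 0.0],
                      [0.0, 1.0], [0.0, 0.6]])          # 4 bins x 2 sources
activations = np.array([[1, 1, 0, 0, 1],
                        [0, 0, 1, 1, 1]], dtype=float)  # 2 sources x 5 frames
V = templates @ activations
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # small relative error: the rank-2 structure is recovered
```

For transcription or separation, each column of W plays the role of one note or source spectrum and the corresponding row of H tells when it sounds.<br />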
<br />
Guest Lecture 7: Andreas Ehmann, MIREX <br><br />
<br />
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]<br />
<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
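The set-based versions of these metrics take only a few lines; a sketch with a made-up query result:<br />

```python
def precision_recall_f(retrieved, relevant):
    """Precision, recall, and F-measure for one retrieval result (set-based)."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # true positives: relevant items returned
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# 3 of the 4 returned items are relevant; 3 of the 6 relevant items were found.
p, r, f = precision_recall_f(["a", "b", "c", "d"], ["a", "b", "c", "e", "f", "g"])
print(p, r, f)  # 0.75 0.5 0.6
```

Ranked retrieval refines this by computing precision and recall at each cutoff, which is what the ROC and precision-recall curves in the references above summarize.<br />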
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
<code>for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done</code>
* Extract CAL 500 per-song features to .mat or .csv using the features from today. These will be used in Friday's lab. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>
<hr />
<div>= Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval =<br />
<br />
== Logistics ==<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Nette, Carr, Fernando). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (Spectral statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
<br />
MFCCs Sonified<br><br />
Original track ("Chewing Gum"): [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694] <br><br />
MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/] <br><br />
<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://nbviewer.ipython.org/github/stevetjoa/stanford-mir/blob/master/Table_of_Contents.ipynb Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
CCRMA Tour<br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add in MFCCs to classification and test w Cross validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Demo: iZotope Discover (Sound Similarity Search, jay) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture: Stephen Pope (SndsLike, BirdGenie)<br />
[https://ccrma.stanford.edu/workshops/mir2014/MAT_MIR4-update.pdf MAT_MIR4-update slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/BirdsEar.pdf BirdGenie Slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/SndsLike.pdf SndsLike Slides]<br />
<br />
Lecture 5: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_ML.pdf Lecture 5 Slides]<br />
<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Guest Lecture 7: Andreas Ehmann, MIREX <br><br />
<br />
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]<br />
<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[ https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* [see also below references]<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
<code>for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done</code><br />
* Extract CAL500 per-song features to .mat or .csv using the features from today. This will be used in Friday's lab. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18227MIR workshop 20152015-07-10T18:09:44Z<p>Kiemyang: /* Day 1: Introduction to MIR, Signal Analysis and Feature Extraction */</p>
<hr />
<div>Under construction.<br />
<br />
<b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop teaches the underlying ideas, approaches, and technologies behind the practical design of intelligent audio systems that use music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make understanding and applying these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Nette, Carr, Fernando). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
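The k-NN and cross-validation ideas above can be sketched in a few lines of plain Python. This is a toy illustration, not the lab code: the 2-D feature vectors and class labels below are invented for the example.<br />

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    labeled training points (Euclidean distance)."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Made-up 2-D feature vectors (zero-crossing rate, RMS energy):
# snare hits tend to have a much higher ZCR than kick drums.
train = [
    ((0.02, 0.8), "kick"), ((0.03, 0.7), "kick"), ((0.04, 0.9), "kick"),
    ((0.30, 0.5), "snare"), ((0.35, 0.6), "snare"), ((0.28, 0.4), "snare"),
]

# Leave-one-out cross-validation: hold out each point in turn and
# classify it using the remaining points as the training set.
correct = sum(
    knn_classify(train[:i] + train[i + 1:], x) == label
    for i, (x, label) in enumerate(train)
)
accuracy = correct / len(train)
```

Note that k-NN has no training step: the "model" is just the stored labeled feature vectors, which is why evaluation must use data held out from that store.<br />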
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (Spectral statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S encoding, surround-sound processing, frequency-dependent spatial separation, LCR sources<br />
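As a minimal, self-contained sketch of the windowed feature extraction loop above, here are frame-wise RMS energy and zero-crossing rate in pure Python on a synthesized sine tone (the lab itself uses library routines for this):<br />

```python
import math

def frame_features(signal, frame_size=1024, hop=512):
    """Slide a window over the signal and compute per-frame
    RMS energy and zero-crossing rate (both in [0, 1])."""
    feats = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        rms = math.sqrt(sum(x * x for x in frame) / frame_size)
        crossings = sum(1 for a, b in zip(frame, frame[1:])
                        if (a < 0) != (b < 0))
        feats.append((rms, crossings / (frame_size - 1)))
    return feats

# One second of a 440 Hz sine at a 44100 Hz sample rate: expect
# RMS near 1/sqrt(2) and about 880 zero crossings per second.
sr = 44100
sine = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
feats = frame_features(sine)
```
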
<br />
MFCCs Sonified<br><br />
Original track ("Chewing Gum"): [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694] <br><br />
MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/] <br><br />
<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://nbviewer.ipython.org/github/stevetjoa/stanford-mir/blob/master/Table_of_Contents.ipynb Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
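The simplest onset detector above (a time-domain energy difference) fits in a few lines; spectral-flux and adaptive detectors follow the same pattern with a different novelty function. The synthetic signal and threshold value here are invented for the example:<br />

```python
import math

def onset_frames(signal, frame_size=512, threshold=0.1):
    """Report frame indices where RMS energy jumps by more than
    `threshold` relative to the previous frame."""
    rms = []
    for start in range(0, len(signal) - frame_size + 1, frame_size):
        frame = signal[start:start + frame_size]
        rms.append(math.sqrt(sum(x * x for x in frame) / frame_size))
    return [i for i in range(1, len(rms)) if rms[i] - rms[i - 1] > threshold]

# Silence with two decaying 220 Hz tones, starting at samples
# 2048 and 8192 (frames 4 and 16 for a 512-sample frame).
sr = 8000
sig = [0.0] * 12000
for start in (2048, 8192):
    for n in range(2000):
        sig[start + n] = math.exp(-n / 800) * math.sin(2 * math.pi * 220 * n / sr)

onsets = onset_frames(sig)
```

A fixed threshold works on clean examples like this; the adaptive schemes listed above exist precisely because real recordings need a moving, signal-dependent threshold.<br />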
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
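Of the analysis techniques above, dynamic time warping is the most compact to show. Here is the classic DTW recurrence in plain Python over 1-D pitch sequences (toy MIDI-pitch data invented for the example); audio-score alignment and query-by-humming apply the same idea to chroma or pitch contours:<br />

```python
def dtw_distance(a, b):
    """Dynamic time warping distance with |x - y| as local cost.
    dp[i][j] = cost of the best alignment of a[:i] with b[:j]."""
    inf = float("inf")
    dp = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    dp[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # a[i-1] repeated
                                  dp[i][j - 1],      # b[j-1] repeated
                                  dp[i - 1][j - 1])  # advance both
    return dp[len(a)][len(b)]

# The same melody played twice, once with a held (repeated) note:
# warping absorbs the timing difference, so the distance is zero.
query = [60, 62, 64, 65, 67]
reference = [60, 62, 62, 64, 65, 67]
```
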
<br />
CCRMA Tour<br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Demo: iZotope Discover (Sound Similarity Search, Jay) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture: Stephen Pope (SndsLike, BirdGenie)<br />
[https://ccrma.stanford.edu/workshops/mir2014/MAT_MIR4-update.pdf MAT_MIR4-update slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/BirdsEar.pdf BirdGenie Slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/SndsLike.pdf SndsLike Slides]<br />
<br />
Lecture 5: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_ML.pdf Lecture 5 Slides]<br />
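A bare-bones sketch of the k-means step from Lecture 5 (Lloyd's iterations in plain Python on made-up 2-D feature points; in the lab you would use a library implementation instead):<br />

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        for c, members in enumerate(clusters):
            if members:  # leave an empty cluster's centroid in place
                centroids[c] = tuple(sum(xs) / len(members)
                                     for xs in zip(*members))
    return centroids, clusters

# Two well-separated blobs of invented feature vectors.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
          (5.0, 5.0), (5.1, 5.2), (4.9, 5.1)]
centroids, clusters = kmeans(points, k=2)
```

Unlike the k-NN classifier, this is unsupervised: no labels are used, and the two clusters emerge from the geometry of the feature space alone.<br />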
<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
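The NMF step above can be sketched with the Lee-Seung multiplicative updates for the Euclidean cost. The toy matrix V stands in for a magnitude spectrogram built from two "note templates"; all numbers are invented for the example:<br />

```python
import random

def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=500, seed=1):
    """Factor V ~= W H with nonnegative W, H via the Lee-Seung
    multiplicative updates for the Euclidean cost."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

# Toy "spectrogram": templates [1, 0, 2] and [0, 3, 1] with
# activations [1, 0, 1, 2] and [0, 1, 1, 0] over four frames.
V = [[1, 0, 1, 2],
     [0, 3, 3, 0],
     [2, 1, 3, 4]]
W, H = nmf(V, k=2)
recon = matmul(W, H)
err = sum((V[i][j] - recon[i][j]) ** 2 for i in range(3) for j in range(4))
```

In source separation, the columns of W act as learned spectral templates and the rows of H as their activations in time; each rank-one term is one separated component.<br />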
<br />
Guest Lecture 7: Andreas Ehmann, MIREX <br><br />
<br />
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]<br />
<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
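The set-based versions of these metrics are tiny; a sketch with invented retrieval results:<br />

```python
def precision_recall_f1(retrieved, relevant):
    """Set-based IR metrics: precision = fraction of retrieved items
    that are relevant, recall = fraction of relevant items that were
    retrieved, F-measure = their harmonic mean."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# A query returns four songs; six songs in the collection are
# actually relevant, and two of the returned four are among them.
p, r, f = precision_recall_f1(retrieved=["a", "b", "c", "d"],
                              relevant=["a", "b", "e", "f", "g", "h"])
```
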
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
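For intuition before the wavelet slides: the (unnormalized) Haar transform is just recursive pairwise averages and differences. A sketch in plain Python; production code would use normalized filter banks, but the structure is identical:<br />

```python
def haar_step(signal):
    """One level of the (unnormalized) Haar transform: pairwise
    averages (approximation) and halved differences (detail)."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_transform(signal):
    """Full multi-level Haar decomposition of a length-2^n signal:
    returns the final approximation plus detail coefficients per level."""
    coeffs = []
    approx = list(signal)
    while len(approx) > 1:
        approx, detail = haar_step(approx)
        coeffs.append(detail)
    return approx[0], coeffs
```

Each level halves the time resolution while isolating detail at a coarser scale, which is the multi-resolution view that the beat-tracking lecture exploited.<br />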
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== Software, Libraries, and Examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html Univ or Iowa Music Instrument Samples ]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL 500 per-song features to .mat or .csv using features from today. This will be used on lab for Friday. Copy it from the folder ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware it's a 2Gb .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18226MIR workshop 20152015-07-10T18:08:12Z<p>Kiemyang: /* Abstract */</p>
<hr />
<div>Under construction.<br />
<br />
<b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
<br />
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop Structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course <work in progress>'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Nette, Carr, Fernando). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (Spectral statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
<br />
MFCCs Sonified<br><br />
Original track ("Chewing Gum"): [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694] <br><br />
MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/] <br><br />
<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://nbviewer.ipython.org/github/stevetjoa/stanford-mir/blob/master/Table_of_Contents.ipynb Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
CCRMA Tour<br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add in MFCCs to classification and test w Cross validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Demo: iZotope Discover (Sound Similarity Search, Jay) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture: Stephen Pope (SndsLike, BirdGenie)<br />
[https://ccrma.stanford.edu/workshops/mir2014/MAT_MIR4-update.pdf MAT_MIR4-update slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/BirdsEar.pdf BirdGenie Slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/SndsLike.pdf SndsLike Slides]<br />
<br />
Lecture 5: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_ML.pdf Lecture 5 Slides]<br />
<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
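For reference, the k-means loop at the heart of the clustering lab fits in a screenful of NumPy. This is a hypothetical sketch with synthetic stand-ins for MFCC frames (the real lab extracts MFCCs from audio):<br />

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Distance from every frame to every center, then hard assignment.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Stand-in for MFCC frames of two instruments: two well-separated blobs
# in a 13-dimensional feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (100, 13)), rng.normal(2, 0.5, (100, 13))])
labels, centers = kmeans(X, k=2)
print(labels[:100].std(), labels[100:].std())
```

With well-separated feature clusters the assignment is perfect (each blob maps entirely to one label); real instrument timbres overlap far more, which is what motivates the supervised methods in Lecture 5.<br />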
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
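Nonnegative matrix factorization, as used for transcription and source separation, can be sketched with the classic Lee-Seung multiplicative updates. The toy "spectrogram" below is our own stand-in for a real magnitude STFT:<br />

```python
import numpy as np

def nmf(V, r, n_iter=1000, seed=0):
    """Lee-Seung multiplicative updates (Euclidean loss): V ~ W @ H, with
    W (freq x r) spectral templates and H (r x time) activations."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 1e-3
    H = rng.random((r, V.shape[1])) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "spectrogram": two spectral templates (think: two notes) active in
# different time frames, plus one frame where both sound together.
t1 = np.array([1.0, 0.0, 0.5, 0.0])
t2 = np.array([0.0, 1.0, 0.0, 0.5])
act = np.array([[1, 1, 0, 0, 1],
                [0, 0, 1, 1, 1]], dtype=float)
V = np.outer(t1, act[0]) + np.outer(t2, act[1])

W, H = nmf(V, r=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

Each column of W acts as a spectral template (one note or drum) and each row of H as its activation over time; transcription reads note events off H, and masking V with one template's contribution gives a separated source.<br />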
<br />
Guest Lecture 7: Andreas Ehmann, MIREX <br><br />
<br />
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]<br />
<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
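The set-based metrics in the references above reduce to a few lines; the query results and relevance judgments below are made up for illustration:<br />

```python
def precision_recall_f(retrieved, relevant):
    """Set-based IR metrics for a single query."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)               # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# A query returns 4 documents; 3 of them are among the 6 relevant ones.
p, r, f = precision_recall_f([1, 2, 3, 4], [2, 3, 4, 5, 6, 7])
print(p, r, f)  # 0.75 0.5 0.6
```

The F-measure is the harmonic mean of precision and recall, so it punishes a system that buys one at the expense of the other.<br />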
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
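As a minimal companion to the wavelets lecture, one level of the Haar transform (and its inverse) looks like this; the sample signal is arbitrary:<br />

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: normalized pairwise sums
    (approximation) and differences (detail), each at half the rate."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one Haar level (perfect reconstruction)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 0.0])
a, d = haar_dwt(x)
print(np.allclose(haar_idwt(a, d), x))  # True
```

Recursing on the approximation coefficients gives the multi-resolution pyramid used in wavelet-based beat and onset analysis.<br />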
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures == <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pages 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) (available in the Stanford Music Library)<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
<code>for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done</code><br />
* Extract CAL500 per-song features to .mat or .csv using today's features. These will be used in Friday's lab. Copy the archive from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>
<hr />
<div>Under construction.<br />
<br />
<b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [https://stevetjoa.com Steve Tjoa]<br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.realindustry.com Real Industry.],<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for: students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course (work in progress)'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Nette, Carr, Fernando). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
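As a minimal sketch of the instance-based classification and evaluation workflow above, here is a k-NN classifier scored with 5-fold cross-validation using scikit-learn (one of the packages listed under the workshop's dependencies). The two-class "kick"/"snare" feature clouds are synthetic stand-ins for features you would actually extract:<br />

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for extracted features: two classes ("kick" vs. "snare"),
# each a cloud of 2-D feature vectors (e.g. ZCR and spectral centroid).
rng = np.random.default_rng(0)
kicks  = rng.normal(loc=[0.1, 500.0],  scale=[0.05, 100.0], size=(50, 2))
snares = rng.normal(loc=[0.4, 2500.0], scale=[0.05, 300.0], size=(50, 2))
X = np.vstack([kicks, snares])
y = np.array([0] * 50 + [1] * 50)

# k-NN classifier evaluated with 5-fold cross-validation.
knn = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(knn, X, y, cv=5)
print("fold accuracies:", scores)
```

With features this well separated, fold accuracies sit near 1.0; real extracted features overlap far more, which is exactly what the evaluation methodology is for.<br />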
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (Spectral statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing, Frequency-dependent spatial separation, LCR sources<br />
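The time- and frequency-domain features above can be computed in a plain windowed analysis loop. This NumPy sketch (the frame size, hop, and 440 Hz test tone are arbitrary choices for the example, not from the lecture) extracts RMS, zero-crossing rate, and spectral centroid per frame:<br />

```python
import numpy as np

def frame_features(x, frame_size=1024, hop=512, sr=44100):
    """Windowed feature extraction: RMS, zero-crossing rate, and
    spectral centroid for each frame of a mono signal x."""
    feats = []
    for start in range(0, len(x) - frame_size + 1, hop):
        frame = x[start:start + frame_size]
        rms = np.sqrt(np.mean(frame ** 2))
        # Each sign change contributes |diff| = 2, so divide by 2.
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
        mag = np.abs(np.fft.rfft(frame * np.hanning(frame_size)))
        freqs = np.fft.rfftfreq(frame_size, d=1.0 / sr)
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
        feats.append((rms, zcr, centroid))
    return np.array(feats)

# Sanity check on a 440 Hz sine: RMS should sit near 0.707 and the
# spectral centroid near 440 Hz.
sr = 44100
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
feats = frame_features(x, sr=sr)
print(feats.mean(axis=0))
```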
<br />
MFCCs Sonified<br><br />
Original track ("Chewing Gum"): [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694] <br><br />
MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/] <br><br />
<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://nbviewer.ipython.org/github/stevetjoa/stanford-mir/blob/master/Table_of_Contents.ipynb Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
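One of the approaches listed above, autocorrelation-based tempo estimation, fits in a few lines of NumPy. The synthetic onset-strength envelope below (impulses at 120 BPM plus noise) is an invented stand-in for a real onset detection function:<br />

```python
import numpy as np

# Synthetic onset-strength envelope: an impulse every 0.5 s (120 BPM),
# sampled at 100 frames per second, with a little noise.
fps = 100
duration = 10  # seconds
env = np.zeros(duration * fps)
env[::fps // 2] = 1.0
env += 0.05 * np.random.default_rng(1).random(env.size)

# Autocorrelation of the envelope; the strongest peak away from lag 0
# (within a plausible tempo range) corresponds to the beat period.
ac = np.correlate(env, env, mode="full")[env.size - 1:]
min_lag = int(0.25 * fps)   # ignore lags shorter than 0.25 s (240 BPM)
max_lag = int(2.0 * fps)    # ignore lags longer than 2 s (30 BPM)
period = min_lag + np.argmax(ac[min_lag:max_lag])
tempo = 60.0 * fps / period
print("estimated tempo: %.1f BPM" % tempo)
```

Real onset detection functions are far noisier, which is why practical trackers combine this with the multi-resolution and probabilistic methods above.<br />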
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
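As a concrete illustration of one analysis tool above, here is a textbook dynamic time warping implementation in plain NumPy, with absolute difference as the local cost; the short 1-D sequences stand in for real chroma or pitch features:<br />

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D feature
    sequences, filling the (n+1) x (m+1) cumulative-cost matrix."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# The same melody contour at two different "tempi" aligns at zero cost...
a = np.array([60, 60, 62, 64, 64, 65, 67])
b = np.array([60, 62, 64, 65, 67])        # faster rendition
print(dtw_distance(a, b))                  # small
# ...while an unrelated contour does not.
c = np.array([72, 71, 69, 67, 65])
print(dtw_distance(a, c))                  # larger
```

This alignment idea underlies audio-to-score alignment and cover song detection, where the sequences are frame-wise chroma vectors rather than scalars.<br />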
<br />
CCRMA Tour<br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Demo: iZotope Discover (Sound Similarity Search, Jay) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture: Stephen Pope (SndsLike, BirdGenie)<br />
[https://ccrma.stanford.edu/workshops/mir2014/MAT_MIR4-update.pdf MAT_MIR4-update slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/BirdsEar.pdf BirdGenie Slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/SndsLike.pdf SndsLike Slides]<br />
<br />
Lecture 5: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_ML.pdf Lecture 5 Slides]<br />
<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
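A minimal sketch of the lab topic, clustering MFCC-like frames with k-means via scikit-learn; the two Gaussian clouds below are made-up stand-ins for frame-wise MFCC vectors from two different instruments:<br />

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for frame-wise MFCC vectors from two instruments: two
# well-separated Gaussian clouds in a 13-dimensional feature space.
rng = np.random.default_rng(2)
inst_a = rng.normal(loc=0.0, scale=0.5, size=(200, 13))
inst_b = rng.normal(loc=3.0, scale=0.5, size=(200, 13))
X = np.vstack([inst_a, inst_b])

# Unsupervised: k-means recovers the two groups without any labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
print(np.bincount(labels))
```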
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
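A toy illustration of NMF for separating a polyphonic mixture, using scikit-learn's NMF; the 4-bin spectral templates and on/off activations below are invented for the example. The magnitude "spectrogram" V is factored into nonnegative spectral templates W and per-frame activations H:<br />

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy magnitude "spectrogram": two sources with fixed spectra that
# switch on and off over time (rows = frequency bins, columns = frames).
spec_a = np.array([1.0, 0.0, 0.5, 0.0])      # source A's spectral template
spec_b = np.array([0.0, 1.0, 0.0, 0.5])      # source B's spectral template
act_a = np.array([1, 1, 0, 0, 1, 0], float)  # when A sounds
act_b = np.array([0, 1, 1, 1, 0, 1], float)  # when B sounds
V = np.outer(spec_a, act_a) + np.outer(spec_b, act_b)

# Factor V ~= W @ H with nonnegative W (templates) and H (activations).
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)
H = model.components_
print(np.abs(V - W @ H).max())   # reconstruction error
```

Since V is exactly rank 2 and nonnegative, the reconstruction error is tiny here; real spectrograms need more components and give approximate decompositions.<br />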
<br />
Guest Lecture 7: Andreas Ehmann, MIREX <br><br />
<br />
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]<br />
<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
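The basic IR metrics in these references reduce to a few lines of Python; this sketch computes precision, recall, and F-measure for a single query (the track names are made up):<br />

```python
def precision_recall_f(retrieved, relevant):
    """IR evaluation basics: precision, recall, and F-measure for a set
    of retrieved items against a ground-truth relevant set."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall > 0 else 0.0)
    return precision, recall, f

# A query returns 4 tracks, 3 of which are truly relevant out of 6.
p, r, f = precision_recall_f({"t1", "t2", "t3", "t9"},
                             {"t1", "t2", "t3", "t4", "t5", "t6"})
print(p, r, f)   # -> 0.75 0.5 0.6
```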
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2014 CCRMA MIR Summer Workshop 2014]<br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, 2nd edition, Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab: Algorithms for Pattern Recognition, Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, A. Klapuri and M. Davy (editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (editors)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding (batch-convert the MP3s to AIFF with afconvert):<br />
<code>for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done</code><br />
* Extract CAL 500 per-song features to .mat or .csv using features from today. This will be used on lab for Friday. Copy it from the folder ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware it's a 2Gb .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18220MIR workshop 20152015-07-06T21:07:26Z<p>Kiemyang: </p>
<hr />
<div>Under construction.<br />
<br />
<b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 23, through Friday, June 27, 2014. 9:30 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com Steve Tjoa]<br />
** [http://www.leighsmith.com/Research Leigh Smith]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for: students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course <work in progress>'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Nette, Carr, Fernando). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (Spectral statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
<br />
MFCCs Sonified<br><br />
Original track ("Chewing Gum"): [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694] <br><br />
MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/] <br><br />
<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://nbviewer.ipython.org/github/stevetjoa/stanford-mir/blob/master/Table_of_Contents.ipynb Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
CCRMA Tour<br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add in MFCCs to classification and test w Cross validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Demo: iZotope Discover (Sound Similarity Search, jay) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture: Stephen Pope (SndsLike, BirdGenie)<br />
[https://ccrma.stanford.edu/workshops/mir2014/MAT_MIR4-update.pdf MAT_MIR4-update slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/BirdsEar.pdf BirdGenie Slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/SndsLike.pdf SndsLike Slides]<br />
<br />
Lecture 5: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_ML.pdf Lecture 5 Slides]<br />
<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Guest Lecture 7: Andreas Ehmann, MIREX <br><br />
<br />
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]<br />
<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[ https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* [see also below references]<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html Univ. of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding: <code>for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done</code><br />
* Extract CAL500 per-song features to .mat or .csv using the features covered today; these will be used in Friday's lab. Copy the archive from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it is a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (16 GB).</div>
Kiemyang

https://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2015&diff=18219 MIR workshop 2015, 2015-07-06T21:06:44Z
<p>Kiemyang: initial commit</p>
https://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2014&diff=16642 MIR workshop 2014, 2014-07-02T20:49:15Z
<p>Kiemyang: /* Day 3: Machine Learning, Clustering and Classification */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 23, through Friday, June 27, 2014. 9:30 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com Steve Tjoa]<br />
** [http://www.leighsmith.com/Research Leigh Smith]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make understanding and applying these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course (work in progress)'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Nette, Carr, Fernando). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
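As a sketch of the instance-based classification and cross-validation ideas above, here is a minimal k-NN classifier with 4-fold cross-validation in plain NumPy; the 2-D toy feature vectors are a made-up stand-in for the lab's real audio features:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify one feature vector by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training example
    nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest neighbors
    return np.bincount(nearest).argmax()          # majority vote

# Toy data: two well-separated classes (think "kick" vs "snare" feature vectors)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.repeat([0, 1], 20)

# 4-fold cross-validation: hold out one fold for testing, train on the rest
idx = rng.permutation(len(y))
folds = np.array_split(idx, 4)
accs = []
for test_idx in folds:
    train_idx = np.setdiff1d(idx, test_idx)
    preds = [knn_predict(X[train_idx], y[train_idx], x) for x in X[test_idx]]
    accs.append(np.mean(preds == y[test_idx]))
print(np.mean(accs))
```

The same workflow scales to real features by replacing the toy vectors with per-frame or per-file feature matrices.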
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (Spectral statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
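The windowed time-domain features above (RMS energy and zero-crossing rate) can be sketched in a few lines of NumPy; the frame and hop sizes are illustrative choices and the signals are synthetic stand-ins for real audio:

```python
import numpy as np

def frame_features(x, frame_size=1024, hop_size=512):
    """Slice a signal into overlapping frames and compute RMS and ZCR per frame."""
    feats = []
    for start in range(0, len(x) - frame_size + 1, hop_size):
        frame = x[start:start + frame_size]
        rms = np.sqrt(np.mean(frame ** 2))                    # root-mean-square energy
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)    # fraction of sign changes
        feats.append((rms, zcr))
    return np.array(feats)

# A 440 Hz sine at 8 kHz has a low, steady ZCR; white noise has a much higher one
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).normal(0, 1, sr)
print(frame_features(tone)[:, 1].mean(), frame_features(noise)[:, 1].mean())
```

Stacking such per-frame measurements is exactly the feature-vector construction the lab builds on.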
<br />
MFCCs Sonified<br><br />
Original track ("Chewing Gum"): [https://myspace.com/anniemusic/music/song/chewing-gum-28101163-14694] <br><br />
MFCCs only [http://www.cs.princeton.edu/~mdhoffma/icmc2008/] <br><br />
<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://nbviewer.ipython.org/github/stevetjoa/stanford-mir/blob/master/Table_of_Contents.ipynb Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
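One combination of the approaches above, a spectral-flux onset detection function followed by autocorrelation over the ODF for tempo, can be sketched as follows; the click track is synthetic and the STFT parameters are illustrative, not the lab's actual pipeline:

```python
import numpy as np

def spectral_flux(x, n_fft=512, hop=250):
    """Onset detection function: half-wave-rectified spectral magnitude differences per frame."""
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft + 1, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))
    return np.maximum(np.diff(mags, axis=0), 0).sum(axis=1)

sr, hop = 8000, 250
x = np.zeros(sr * 4)
period = int(0.5 * sr)          # a click every 0.5 s -> 120 BPM
x[::period] = 1.0
odf = spectral_flux(x, hop=hop)

# Tempo via autocorrelation of the ODF: the strongest nonzero lag ~ the beat period
ac = np.correlate(odf, odf, mode='full')[len(odf) - 1:]
lag = np.argmax(ac[1:]) + 1     # skip the trivial peak at lag 0
bpm = 60.0 * sr / (lag * hop)
print(round(bpm))
```

Real beat trackers add peak picking, adaptive thresholds, and phase (downbeat) estimation on top of this skeleton.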
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
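Dynamic time warping, the alignment tool behind audio-to-score alignment and cover song detection above, is a small dynamic program; here is a minimal sketch over 1-D sequences (real systems compare chroma vectors per frame rather than scalars):

```python
import numpy as np

def dtw_distance(a, b):
    """Minimal dynamic time warping between two 1-D sequences (absolute-difference local cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three allowed predecessor paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same melody at two tempi aligns with zero cost; a different melody does not
ref = np.array([0, 0, 1, 2, 2, 1, 0], dtype=float)
slow = np.repeat(ref, 2)          # time-stretched version of ref
other = np.array([3, 3, 3, 0, 0, 0, 3], dtype=float)
print(dtw_distance(ref, slow), dtw_distance(ref, other))
```

The insensitivity to tempo is what makes DTW useful for query-by-humming and cover-song matching.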
<br />
CCRMA Tour<br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Demo: iZotope Discover (Sound Similarity Search) - Jay [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture: Stephen Pope (SndsLike, BirdGenie)<br />
[https://ccrma.stanford.edu/workshops/mir2014/MAT_MIR4-update.pdf MAT_MIR4-update slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/BirdsEar.pdf BirdGenie Slides]<br />
[https://ccrma.stanford.edu/workshops/mir2014/SndsLike.pdf SndsLike Slides]<br />
<br />
Lecture 5: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_ML.pdf Lecture 5 Slides]<br />
<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
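The Lab 3 pipeline (cluster per-frame feature vectors with k-means) can be sketched without any toolbox. This is plain Lloyd's algorithm with a farthest-point initialization, run on simulated stand-ins for MFCC vectors; the data and function are ours, not the lab's:

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Plain Lloyd's algorithm with farthest-point initialization."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]
    while len(centroids) < k:
        # Next centroid: the point farthest from all chosen so far.
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):
        # Assignment step: nearest centroid per point.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: centroid = mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Simulated per-frame feature vectors (stand-ins for 13-D MFCCs)
# drawn from two well-separated "instruments".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (100, 13)),
               rng.normal(5.0, 0.5, (100, 13))])
labels, _ = kmeans(X, k=2)
print(len(set(labels[:100])), len(set(labels[100:])))  # 1 1
```

Each simulated source lands in its own cluster; on real audio the same loop groups frames by timbre.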
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 6: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 6 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
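The factorization at the heart of the lecture can be sketched with the standard Euclidean multiplicative updates (Lee & Seung), applied to a toy two-note "spectrogram"; dimensions, iteration count, and the small stabilizing constants are illustrative assumptions:

```python
import numpy as np

def nmf(V, k, n_iter=1000, seed=0):
    """Factor V ~= W @ H (all nonnegative) with Lee & Seung's
    Euclidean multiplicative updates. W holds spectral templates,
    H their activations over time."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "spectrogram": two notes with disjoint spectra, alternating in time.
note1 = np.array([1.0, 0.8, 0.0, 0.0])[:, None]
note2 = np.array([0.0, 0.0, 1.0, 0.6])[:, None]
V = np.hstack([note1, note1, note2, note2, note1])  # 4 freq bins x 5 frames
W, H = nmf(V, k=2)
print(float(np.abs(W @ H - V).max()))  # near zero: V is exactly rank 2
```

For transcription, the recovered rows of H indicate when each note sounds; for separation, each W column times its H row is resynthesized on its own.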
<br />
Guest Lecture 7: Andreas Ehmann, MIREX <br><br />
<br />
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_IR.pdf Slides]<br />
<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
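The referenced evaluation metrics reduce to a few set counts; a minimal sketch over retrieved and relevant document IDs (the function name is ours):

```python
def ir_metrics(retrieved, relevant):
    """Precision, recall, and F-measure from two sets of document IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# 3 of 4 retrieved items are relevant; 3 of 6 relevant items were found.
p, r, f = ir_metrics(retrieved=[1, 2, 3, 9], relevant=[1, 2, 3, 4, 5, 6])
print(p, r, round(f, 2))  # 0.75 0.5 0.6
```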
<br />
=== Day 5: Deep Belief Networks and Wavelets ===<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
Lecture 11: Leigh Smith, An Introduction to Wavelets [https://ccrma.stanford.edu/workshops/mir2014/CCRMA_MIR2014_Wavelets.pdf Slides]<br />
<br />
[https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf Neural Networks made easy]<br />
<br />
Lunch at [http://en.wikipedia.org/wiki/Homebrew_Computer_Club The Oasis]<br />
<br />
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9<br />
<br />
Afternoon: CCRMA Lawn BBQ<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [https://ccrma.stanford.edu/wiki/MIR_workshop_2013 CCRMA MIR Summer Workshop 2013]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/<br />
<br />
[[MIR_workshop_2014]]<br />
<br />
=== Bonus Lab Material from Previous Years (Matlab) ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using today's features; these will be used in Friday's lab. Copy the data from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15144MIR workshop 20132013-07-08T19:55:47Z<p>Kiemyang: /* Logistics */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course (work in progress)'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
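Two of the time-domain features above (RMS and zero-crossing rate), computed in the windowed analysis loop just described; frame sizes and the test signals are illustrative:

```python
import numpy as np

def frame_features(x, frame_size=1024, hop_size=512):
    """Per-frame RMS energy and zero-crossing rate."""
    feats = []
    for start in range(0, len(x) - frame_size + 1, hop_size):
        frame = x[start:start + frame_size]
        rms = np.sqrt(np.mean(frame ** 2))
        # Fraction of adjacent-sample pairs whose sign differs.
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
        feats.append((rms, zcr))
    return np.array(feats)

# Noise has a far higher ZCR than a low tone of similar RMS level.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 110 * t)
noise = np.random.default_rng(0).normal(0.0, 0.7, sr)
zcr_tone = frame_features(tone)[:, 1].mean()
zcr_noise = frame_features(noise)[:, 1].mean()
print(zcr_tone < zcr_noise)  # True
```

Stacking such per-frame values into a feature matrix is the input format every classifier in the later labs expects.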
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classifier and test with cross-validation<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_ML.pdf Lecture 5 Slides]<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture 6: Ching-Wei Chen, Gracenote<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 7: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 7 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Guest Lecture 8: Nick Bryan, Gautham Mysore<br />
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick Mini Course of Source Separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
Itakura-Saito Divergence: [http://www.researchgate.net/publication/23250940_Nonnegative_matrix_factorization_with_the_Itakura-Saito_divergence_with_application_to_music_analysis/file/32bfe50fb2aa75bd93.pdf PDF]<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
<br />
Lecture 9: Leigh Smith, Evaluation Metrics for Information Retrieval [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_IR.pdf Slides]<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using today's features; these will be used in Friday's lab. Copy the data from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15136MIR workshop 20132013-06-29T20:36:10Z<p>Kiemyang: /* Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course''' (work in progress)<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
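The Day 1 labs are in Matlab, but the frame-level-feature-plus-instance-based-classifier pipeline above can be sketched in a few lines of Python/NumPy. The toy "snare"/"kick" frames and function names below are invented purely for illustration:<br />

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent-sample pairs whose signs differ."""
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

def knn_classify(x, train_feats, train_labels, k=1):
    """Label x by majority vote among its k nearest training points (1-D feature)."""
    dists = np.abs(train_feats - x)
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy data: noise-like frames have high ZCR, tone-like frames low ZCR.
t = np.arange(512) / 44100.0
rng = np.random.default_rng(0)
snare = rng.standard_normal((4, 512))                  # noisy -> many crossings
kick = np.sin(2 * np.pi * 60 * t) * np.ones((4, 1))    # 60 Hz tone -> few crossings

feats = np.array([zero_crossing_rate(f) for f in np.vstack([snare, kick])])
labels = ['snare'] * 4 + ['kick'] * 4

test_frame = rng.standard_normal(512)
print(knn_classify(zero_crossing_rate(test_frame), feats, labels, k=3))  # -> snare
```

A simple threshold on ZCR would also separate these two toy classes; the point of k-NN is that it generalizes to feature vectors where hand-picked thresholds stop working.<br />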
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
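As a minimal illustration of one frequency-domain feature from the list above, here is the spectral centroid (the first spectral moment) in Python/NumPy; the sample rate and the 1 kHz test tone are arbitrary choices for the example:<br />

```python
import numpy as np

def spectral_centroid(frame, sr):
    """First spectral moment: magnitude-weighted mean frequency, in Hz."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

sr = 8000
t = np.arange(1024) / sr
tone = np.sin(2 * np.pi * 1000 * t)        # pure 1 kHz tone
print(spectral_centroid(tone, sr))         # approximately 1000.0
```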
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
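Autocorrelation-based tempo estimation, one of the approaches listed above, can be sketched as follows. This is an illustrative Python/NumPy toy, not the lab code; the synthetic onset-strength envelope and frame rate are made up:<br />

```python
import numpy as np

def estimate_tempo(onset_env, frame_rate, bpm_range=(60, 180)):
    """Pick the autocorrelation lag with the strongest peak inside a BPM range."""
    env = onset_env - onset_env.mean()
    acf = np.correlate(env, env, mode='full')[len(env) - 1:]
    # Convert the allowed tempo range into a lag range (in frames).
    min_lag = int(frame_rate * 60 / bpm_range[1])
    max_lag = int(frame_rate * 60 / bpm_range[0])
    best = min_lag + np.argmax(acf[min_lag:max_lag + 1])
    return 60.0 * frame_rate / best

# Synthetic onset envelope: an impulse every 0.5 s (120 BPM) at 100 frames/s.
frame_rate = 100
env = np.zeros(1000)
env[::50] = 1.0
print(estimate_tempo(env, frame_rate))   # -> 120.0
```

Real onset detection functions are noisy and quasi-periodic, which is why the lectures cover beat histograms and multi-resolution methods as more robust alternatives.<br />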
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
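Dynamic Time Warping, the alignment tool behind applications such as audio-score alignment and cover song detection, is easy to sketch. This Python toy uses invented pitch sequences for illustration:<br />

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping cost between two 1-D feature sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Each cell extends the cheapest of: insertion, deletion, match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = [60, 62, 64, 65, 64]                 # a short melody (MIDI pitches)
b = [60, 60, 62, 64, 64, 65, 64]         # same contour, some notes held longer
c = [60, 67, 60, 67, 60]                 # a different melody
print(dtw_distance(a, b), dtw_distance(a, c))   # tempo-warped copy aligns at cost 0.0
```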
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification system and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_ML.pdf Lecture 5 Slides]<br />
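As a rough Python/NumPy sketch of the unsupervised side (plain Lloyd's k-means, not the lab's Matlab code), run on synthetic 2-D "feature vectors":<br />

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated blobs of 2-D feature vectors (e.g. two timbre classes).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels, centroids = kmeans(X, k=2)
```

GMMs generalize this by replacing hard assignments with per-cluster Gaussian likelihoods, and SVMs move to the supervised setting; both are covered in the Lecture 5 slides.<br />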
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture 6: Ching-Wei Chen, Gracenote<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 7: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 7 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
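The nonnegative matrix factorization at the heart of this lecture can be sketched with the classic Lee-Seung multiplicative updates (Euclidean cost). This Python/NumPy toy uses an invented 4-bin "spectrogram" built from two spectral templates:<br />

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H (Euclidean cost)."""
    rng = np.random.default_rng(seed)
    n_freq, n_frames = V.shape
    W = rng.random((n_freq, rank)) + 1e-3
    H = rng.random((rank, n_frames)) + 1e-3
    for _ in range(n_iter):
        # Ratios of positive terms keep W and H nonnegative throughout.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "spectrogram": two spectral templates active in disjoint frames.
templates = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
activations = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
V = templates @ activations
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)   # small after convergence
```

In source separation, columns of W act as learned spectral templates and rows of H as their time activations; masking the mixture with one component's reconstruction recovers that source.<br />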
<br />
Guest Lecture 8: Nick Bryan, Gautham Mysore<br />
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick's mini-course on source separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
Itakura-Saito Divergence: [http://www.researchgate.net/publication/23250940_Nonnegative_matrix_factorization_with_the_Itakura-Saito_divergence_with_application_to_music_analysis/file/32bfe50fb2aa75bd93.pdf PDF]<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
<br />
Lecture 9: Leigh Smith, Evaluation Metrics for Information Retrieval [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_IR.pdf Slides]<br />
<br />
Lecture 10: Steve Tjoa, Introduction to Deep Learning [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_DBN.pdf Slides]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
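The basic IR metrics above are easy to compute directly; a small Python sketch (the item IDs are hypothetical):<br />

```python
def precision_recall_f(retrieved, relevant):
    """Precision, recall and F-measure for one retrieval result."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                       # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f

# 3 of the 4 returned items are relevant; 3 of the 6 relevant items were found.
p, r, f = precision_recall_f(retrieved=[1, 2, 3, 9], relevant=[1, 2, 3, 4, 5, 6])
print(p, r, f)   # 0.75 0.5 0.6
```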
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
<code>for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done</code><br />
* Extract CAL 500 per-song features to .mat or .csv using features from today. This will be used in Friday's lab. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== Software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* See also the references listed below<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15134MIR workshop 20132013-06-27T22:11:57Z<p>Kiemyang: /* Day 4: Music Information Retrieval in Polyphonic Mixtures */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course''' (work in progress)<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification system and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_ML.pdf Lecture 5 Slides]<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture 6: Ching-Wei Chen, Gracenote<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
<br />
Lecture 7: Steve Tjoa, [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 7 Slides]<br />
<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Guest Lecture 8: Nick Bryan, Gautham Mysore<br />
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick's mini-course on source separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
Itakura-Saito Divergence: [http://www.researchgate.net/publication/23250940_Nonnegative_matrix_factorization_with_the_Itakura-Saito_divergence_with_application_to_music_analysis/file/32bfe50fb2aa75bd93.pdf PDF]<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenters: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
<code>for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done</code><br />
* Extract CAL 500 per-song features to .mat or .csv using features from today. This will be used in Friday's lab. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15133MIR workshop 20132013-06-27T22:11:10Z<p>Kiemyang: /* Day 4: Music Information Retrieval in Polyphonic Mixtures */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.]<br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course (work in progress)'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
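The k-NN classification and cross-validation ideas above can be sketched in a few lines of numpy; this is a hypothetical two-class toy problem (e.g. two "instruments" in a 2-D feature space), not the lab's actual data:<br />

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from the query to every training point
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    # majority vote among the k nearest labels
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

def loo_accuracy(X, y, k=3):
    # leave-one-out cross-validation: hold out each point in turn
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        hits += int(knn_predict(X[mask], y[mask], X[i], k) == y[i])
    return hits / len(X)

rng = np.random.default_rng(0)
# two well-separated synthetic classes in a 2-D feature space
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
acc = loo_accuracy(X, y, k=3)
```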
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing, Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
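Several of the time- and frequency-domain features above (RMS, ZCR, spectral centroid) can be computed with a simple windowed analysis loop; this numpy sketch runs on a synthetic 440 Hz sine rather than real audio:<br />

```python
import numpy as np

def frame_features(x, sr, frame_len=1024, hop=512):
    """Per-frame RMS, zero-crossing rate, and spectral centroid."""
    rows = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * np.hanning(frame_len)
        # time-domain features
        rms = np.sqrt(np.mean(frame ** 2))
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
        # frequency-domain feature: first spectral moment (centroid)
        mag = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
        rows.append((rms, zcr, centroid))
    return np.array(rows)

sr = 22050
t = np.arange(sr) / sr  # one second of audio
feats = frame_features(np.sin(2 * np.pi * 440 * t), sr)
# columns: RMS, ZCR, spectral centroid (Hz)
```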
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
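As a toy illustration of the autocorrelation approach to tempo estimation, the sketch below recovers the tempo of a synthetic onset-strength envelope; the frame rate and BPM search range are arbitrary choices, not the lecture's:<br />

```python
import numpy as np

# synthetic onset-strength envelope sampled at 100 frames/sec,
# with an "onset" every 0.5 s (i.e. 120 BPM)
fps = 100
env = np.zeros(1000)
env[::50] = 1.0

def tempo_from_envelope(env, fps, bpm_min=40, bpm_max=240):
    # autocorrelate the envelope; keep non-negative lags only
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # search lags corresponding to plausible tempi
    lag_min = int(round(60 * fps / bpm_max))
    lag_max = int(round(60 * fps / bpm_min))
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return 60.0 * fps / lag

bpm = tempo_from_envelope(env, fps)
```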
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
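A minimal sketch of template-based chord detection from a chroma vector: binary major/minor triad templates scored by a dot product. Real systems add smoothing, tuned templates, and HMM decoding:<br />

```python
import numpy as np

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def chord_from_chroma(chroma):
    """Match a 12-bin chroma vector against major/minor triad templates."""
    best, best_score = None, -np.inf
    for root in range(12):
        for quality, intervals in (("maj", (0, 4, 7)), ("min", (0, 3, 7))):
            template = np.zeros(12)
            template[[(root + i) % 12 for i in intervals]] = 1.0
            score = float(chroma @ template)
            if score > best_score:
                best, best_score = f"{NOTES[root]}:{quality}", score
    return best

# chroma with energy on C, E, G -> a C major triad
chroma = np.zeros(12)
chroma[[0, 4, 7]] = 1.0
```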
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_ML.pdf Lecture 5 Slides]<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture 6: Ching-Wei Chen, Gracenote<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
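A bare-bones version of the k-means algorithm used in this lab, written in numpy with deterministic seeding on synthetic data; the lab's own feature vectors (e.g. mean MFCCs) would replace the toy clusters:<br />

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain Lloyd's algorithm; seeds spread evenly through the data."""
    step = max(1, len(X) // k)
    centroids = X[::step][:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(1)
# two well-separated synthetic "timbre" clusters
X = np.vstack([rng.normal(0, 0.5, (25, 2)), rng.normal(8, 0.5, (25, 2))])
labels, centroids = kmeans(X, 2)
```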
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenter: Steve Tjoa<br />
<br />
<u>Day 4: Music Information Retrieval in Polyphonic Mixtures</u> [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 7 Slides]<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
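The nonnegative matrix factorization idea can be sketched with the classic Lee-Seung multiplicative updates for the Euclidean objective, factoring a toy rank-2 "magnitude spectrogram"; real separation systems often use the KL or Itakura-Saito divergence instead:<br />

```python
import numpy as np

rng = np.random.default_rng(0)
# a nonnegative matrix that is exactly rank 2: two spectral
# templates (W_true) activated over time (H_true)
W_true = rng.random((30, 2))
H_true = rng.random((2, 40))
V = W_true @ H_true

def nmf(V, rank, n_iter=1000, eps=1e-9, seed=1):
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(n_iter):
        # multiplicative updates keep W and H nonnegative
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```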
<br />
Guest Lecture 8: Nick Bryan, Gautham Mysore<br />
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick's mini-course on source separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
Itakura-Saito Divergence: [http://www.researchgate.net/publication/23250940_Nonnegative_matrix_factorization_with_the_Itakura-Saito_divergence_with_application_to_music_analysis/file/32bfe50fb2aa75bd93.pdf PDF]<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenter: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
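The precision, recall, and F-measure definitions referenced above reduce to a few counts of true/false positives and false negatives; a minimal sketch on hypothetical binary relevance judgments:<br />

```python
def precision_recall_f(y_true, y_pred):
    # count true positives, false positives, false negatives
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F-measure: harmonic mean of precision and recall
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# 3 relevant items; 2 retrieved correctly, 1 missed, 1 false alarm
p, r, f = precision_recall_f([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```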
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today. These will be used in Friday's lab. Copy the data from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that one is 16 GB)<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15132MIR workshop 20132013-06-27T21:51:11Z<p>Kiemyang: /* Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.]<br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course (work in progress)'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing, Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
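As a rough sketch of the pipeline above (assumed details: spectral flux as the onset detection function and an autocorrelation peak for tempo; all names here are invented for illustration, not the lecture's code), the following NumPy code recovers the tempo of a synthetic click track:<br />

```python
import numpy as np

def spectral_flux_odf(x, frame=512, hop=200):
    """Onset detection function: half-wave-rectified frame-to-frame
    increase in spectral magnitude (spectral flux)."""
    n_frames = 1 + (len(x) - frame) // hop
    window = np.hanning(frame)
    mags = np.array([np.abs(np.fft.rfft(window * x[i * hop:i * hop + frame]))
                     for i in range(n_frames)])
    flux = np.diff(mags, axis=0)
    return np.sum(np.maximum(flux, 0.0), axis=1)   # keep only energy increases

def tempo_from_odf(odf, hop, sr):
    """Estimate tempo (BPM) from the largest autocorrelation peak of the ODF."""
    odf = odf - odf.mean()
    ac = np.correlate(odf, odf, mode='full')[len(odf) - 1:]
    # search plausible inter-beat intervals: 0.25-2.0 s (30-240 BPM)
    lo, hi = int(0.25 * sr / hop), int(2.0 * sr / hop)
    lag = lo + np.argmax(ac[lo:hi])
    return 60.0 * sr / (lag * hop)

# Synthetic test signal: short noise bursts every 0.5 s (120 BPM)
sr = 8000
x = np.zeros(8 * sr)
rng = np.random.default_rng(0)
for onset in range(0, len(x), sr // 2):
    x[onset:onset + 400] += rng.standard_normal(400)

odf = spectral_flux_odf(x)
print(round(tempo_from_odf(odf, hop=200, sr=sr)))   # ~120
```

Real beat trackers then place beats on the ODF peaks subject to the estimated period; this sketch stops at the tempo estimate.<br />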
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
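A minimal pitch class profile can be sketched by folding FFT bin magnitudes into 12 chroma bins. This is an illustrative simplification (real systems weight bins, use log-frequency transforms, and handle tuning drift), with the function name and test tone invented here:<br />

```python
import numpy as np

def chroma(x, sr, ref=440.0):
    """Pitch class profile: fold spectral magnitude into 12 chroma bins
    (class 0 = A, relative to the 440 Hz reference)."""
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    pcp = np.zeros(12)
    for f, m in zip(freqs[1:], mag[1:]):            # skip the DC bin
        pitch_class = int(round(12 * np.log2(f / ref))) % 12
        pcp[pitch_class] += m
    return pcp / pcp.sum()                          # normalize to sum to 1

# A 440 Hz tone should put nearly all chroma energy in class 0 (A)
sr = 8000
t = np.arange(4096) / sr
x = np.sin(2 * np.pi * 440 * t)
print(np.argmax(chroma(x, sr)))   # 0
```

Sequences of such chroma vectors are the usual input to the DTW and HMM methods listed above (alignment, chord and key detection).<br />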
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_ML.pdf Lecture 5 Slides]<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture 6: Ching-Wei Chen, Gracenote<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
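For orientation before the lab, here is a self-contained sketch of k-means (Lloyd's algorithm with farthest-point seeding). The synthetic 13-dimensional points merely stand in for MFCC frames; in the lab you would extract real MFCCs first.<br />

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: farthest-point seeding, then Lloyd's iterations."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):                           # seed centroids far apart
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(iters):                           # assign, then re-estimate
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated synthetic clusters standing in for MFCC frames
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (100, 13)),
               rng.normal(5.0, 0.5, (100, 13))])
labels, _ = kmeans(X, 2)
print(len(set(labels[:100])), len(set(labels[100:])))   # 1 1
```

Production code would use a library implementation (e.g. scikit-learn's KMeans with k-means++ initialization); the point here is only the assign/update loop.<br />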
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenter: Steve Tjoa<br />
<br />
<u>Day 4: Music Information Retrieval in Polyphonic Mixtures</u> [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 7 Slides]<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
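A bare-bones illustration of NMF on a toy "spectrogram", using the Lee-Seung multiplicative updates for the Euclidean objective. This is a generic sketch, not the lecture's code; the toy templates and activations are invented here.<br />

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """NMF via Lee-Seung multiplicative updates: V ~ W @ H, all nonnegative.
    For audio, V is a magnitude spectrogram; W holds spectral templates
    (e.g. one note or drum each) and H their activations over time."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "spectrogram": two spectral templates active at different times
w1 = np.array([1.0, 0.0, 2.0, 0.0])
w2 = np.array([0.0, 3.0, 0.0, 1.0])
V = np.outer(w1, [1, 1, 0, 0, 1]) + np.outer(w2, [0, 0, 1, 1, 0])

W, H = nmf(V, r=2)
print(np.linalg.norm(V - W @ H) < 0.1)   # reconstruction error should be small
```

Separation then amounts to reconstructing each source from its own columns of W and rows of H (plus phase from the mixture).<br />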
<br />
Guest Lecture 8: Nick Bryan, Gautham Mysore<br />
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick's mini course on source separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenter: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
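The set-based versions of these metrics are one-liners. The following sketch (with a made-up retrieval example; the function name is ours) shows precision, recall, and F-measure:<br />

```python
def precision_recall_f(retrieved, relevant):
    """Set-based IR metrics: precision = |ret & rel| / |ret|,
    recall = |ret & rel| / |rel|, F = harmonic mean of the two."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    p = hits / len(retrieved) if retrieved else 0.0
    r = hits / len(relevant) if relevant else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# 3 of the 4 retrieved songs are relevant; 3 of the 6 relevant songs were found
p, r, f = precision_recall_f(["a", "b", "c", "x"], ["a", "b", "c", "d", "e", "f"])
print(p, r, round(f, 2))   # 0.75 0.5 0.6
```

Ranked-retrieval variants (precision at k, average precision, ROC/AROC) generalize these by sweeping a decision threshold.<br />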
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding (using the macOS <code>afconvert</code> tool):<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today; these will be used in Friday's lab. Copy the data from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== Software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures ==<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15131MIR workshop 20132013-06-27T21:50:55Z<p>Kiemyang: /* Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course (work in progress)'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
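To preview the k-NN and cross-validation ideas above, here is an illustrative NumPy-only sketch (the two Gaussian "classes" are synthetic stand-ins for real feature data, and all names are invented for this example):<br />

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training examples."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

def cross_val_accuracy(X, y, k=3, folds=5, seed=0):
    """k-fold cross-validation: hold out each fold in turn as the test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    correct = 0
    for fold in np.array_split(idx, folds):
        mask = np.ones(len(X), bool)
        mask[fold] = False                      # train on everything else
        for i in fold:
            correct += knn_predict(X[mask], y[mask], X[i], k) == y[i]
    return correct / len(X)

# Two separable synthetic classes (e.g. kick vs. snare feature vectors)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(2, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
print(cross_val_accuracy(X, y))   # 1.0 on this easy synthetic data
```

The key discipline the labs stress is the same as here: the held-out fold never influences the training set it is scored against.<br />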
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing, Frequency-dependent Spatial Separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_ML.pdf Lecture 5 Slides]<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture 6: Ching-Wei Chen, Gracenote<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenter: Steve Tjoa<br />
<br />
<u>Day 4: Music Information Retrieval in Polyphonic Mixtures</u> [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 7 Slides]<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Guest Lecture 8: Nick Bryan, Gautham Mysore<br />
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick's mini course on source separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenter: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding (using the macOS <code>afconvert</code> tool):<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today; these will be used in Friday's lab. Copy the data from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== Software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures ==<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15130MIR workshop 20132013-06-27T20:40:26Z<p>Kiemyang: /* Day 4: Music Information Retrieval in Polyphonic Mixtures */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course (work in progress)'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
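The Day 1 pipeline above (a hand-rolled feature fed into an instance-based classifier) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the lab's actual code; the function names are invented here.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent-sample pairs whose signs differ."""
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.bincount(train_y[nearest]).argmax()

# Toy example: two classes separated along one feature dimension.
train_X = np.array([[0.1], [0.2], [0.15], [0.8], [0.9], [0.85]])
train_y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(train_X, train_y, np.array([0.82]))
```

In the lab, the single toy feature is replaced by vectors of ZCR and spectral moments, and the classifier is evaluated with held-out test sets and cross-validation.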
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
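Windowed feature extraction, as listed above, boils down to framing the signal and computing one number per frame. A minimal frame-wise RMS sketch (illustrative only; real extractors window each frame and handle the partial tail):

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Slice x into overlapping frames (drops the partial tail frame)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def rms(frames):
    """Root-mean-square energy of each frame."""
    return np.sqrt(np.mean(frames ** 2, axis=1))

# A signal that is silent for the first half and full-scale for the second.
x = np.concatenate([np.zeros(1024), np.ones(1024)])
frames = frame_signal(x, frame_len=512, hop=256)
energy = rms(frames)
```

The same I/O-and-analysis loop carries every other feature in the list: swap `rms` for ZCR, spectral moments, or MFCCs.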
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />

<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
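As a concrete illustration of an onset detection function, here is a minimal half-wave-rectified spectral-flux sketch in NumPy (illustrative only; the techniques above add log compression, perceptual weighting, adaptive thresholding, and peak picking):

```python
import numpy as np

def spectral_flux(x, frame_len=256, hop=128):
    """Half-wave-rectified frame-to-frame increase in the magnitude spectrum."""
    n_frames = 1 + (len(x) - frame_len) // hop
    window = np.hanning(frame_len)
    mags = np.array([np.abs(np.fft.rfft(window * x[i * hop : i * hop + frame_len]))
                     for i in range(n_frames)])
    diff = np.diff(mags, axis=0)
    # Keep only increases: energy arriving, not leaving.
    return np.sum(np.maximum(diff, 0.0), axis=1)

# Silence followed by a burst of noise: the flux should peak at the transition.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(2048), rng.standard_normal(2048)])
odf = spectral_flux(x)
onset_frame = int(np.argmax(odf))
```

Peaks of this function feed the beat trackers listed above (autocorrelation, beat spectrum, multi-resolution methods) to derive tempo and beat locations.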
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
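Dynamic Time Warping, listed above for audio-to-score alignment and cover song detection, is a short dynamic program. A toy sketch aligning one melody played at two tempi (illustrative; real systems align chroma frames, not raw pitch lists):

```python
import numpy as np

def dtw_cost(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same melody (as MIDI pitches) at two tempi: DTW absorbs the time stretch.
fast = np.array([60, 62, 64, 65], dtype=float)
slow = np.array([60, 60, 62, 62, 64, 64, 65, 65], dtype=float)
```

The warp path makes sequences of different lengths directly comparable, which a sample-by-sample distance cannot do.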
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_ML.pdf Lecture 5 Slides]<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture 6: Ching-Wei Chen, Gracenote<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
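Lab 3's k-means step is plain Lloyd's algorithm: alternate between assigning points to their nearest centroid and moving each centroid to the mean of its points. A NumPy sketch on toy 2-D points standing in for per-song MFCC means (illustrative, not the lab solution):

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels, centroids

# Two well-separated blobs stand in for per-song MFCC feature vectors.
X = np.concatenate([np.zeros((5, 2)), np.ones((5, 2)) * 10.0])
labels, centroids = kmeans(X, k=2)
```

Unlike the supervised k-NN of Lab 1, k-means needs no labels: cluster membership itself is the output.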
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenter: Steve Tjoa<br />
<br />
<u>Day 4: Music Information Retrieval in Polyphonic Mixtures</u> [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 7 Slides]<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
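Nonnegative matrix factorization, listed above for transcription and separation, factors a magnitude spectrogram V into spectral templates W and time activations H. A sketch of the Lee-Seung multiplicative updates for the Euclidean objective on a toy spectrogram (illustrative only; this is not the lecture's code):

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H (Euclidean objective)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    # Strictly positive init so no entry is stuck at zero.
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy spectrogram: two spectral templates active in alternating frames.
templates = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 bins x 2 sources
activations = np.array([[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]])
V = templates @ activations
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)
```

Because the updates are multiplicative and the data nonnegative, W and H stay nonnegative throughout, which is what makes the learned templates interpretable as note or source spectra.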
<br />
Guest Lecture 8: Nick Bryan, Gautham Mysore<br />
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick's Mini Course on Source Separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenters: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
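The retrieval metrics above reduce to a few set operations once you have the sets of retrieved and ground-truth relevant items. A minimal sketch (an illustrative helper, not code from the slides):

```python
def precision_recall_f(retrieved, relevant):
    """IR metrics from a set of retrieved items vs. ground-truth relevant items."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)          # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    # F-measure: harmonic mean of precision and recall.
    f = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f

# 3 of the 4 retrieved items are relevant; 3 of the 6 relevant items were found.
p, r, f = precision_recall_f({1, 2, 3, 9}, {1, 2, 3, 4, 5, 6})
```

Precision answers "how much of what I returned was right?", recall answers "how much of what was right did I return?", and the F-measure penalizes a system that sacrifices one for the other.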
<br />
<br />
'''Lab 5'''<br />
Chroma, Key estimation, and Chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using features from today. This will be used in Friday's lab. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== Software, Libraries, Examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental Papers and Information for the Lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell S & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html Univ. of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15129MIR workshop 20132013-06-27T20:40:03Z<p>Kiemyang: /* Day 3: Machine Learning, Clustering and Classification */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com/, Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for: students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course <work in progress>'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial of Matlab or audio signal processing will split off and received small group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add in MFCCs to classification and test w Cross validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_ML.pdf Lecture 5 Slides]<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
Guest Lecture 6: Ching-Wei Chen, Gracenote<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenter: Steve Tjoa<br />
Guest Speaker: Nick Bryan, Gautham Mysore<br />
<br />
<u>Day 4: Music Information Retrieval in Polyphonic Mixtures</u> [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 4 Slides]<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick Mini Course of Source Separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenters: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
<br />
'''Lab 5'''<br />
Chroma, Key estimation, and Chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today; these will be used in Friday's lab. Copy the archive from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB).<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley, 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15128MIR workshop 20132013-06-27T20:39:04Z<p>Kiemyang: /* Day 3: Machine Learning, Clustering and Classification */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course (work in progress)'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
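A minimal end-to-end sketch of the Day 1 pipeline above — feature vectors in, k-NN classification, cross-validated evaluation — assuming scikit-learn in place of the Matlab tools used in the labs; the 2-D features and class labels are entirely synthetic:<br />

```python
# Hypothetical sketch: k-NN classification of synthetic 2-D "feature
# vectors" (think ZCR vs. spectral centroid), evaluated with 5-fold
# cross-validation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Two well-separated synthetic classes, e.g. "kick" vs. "snare" frames.
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

knn = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(knn, X, y, cv=5)   # 5-fold cross-validation
print(scores.mean())                        # accuracy well above chance
```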
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
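The windowed feature-extraction loop described above can be sketched with plain NumPy; the frame and hop sizes here are arbitrary illustrative choices, not values from the lecture:<br />

```python
# Hypothetical sketch: slice a signal into overlapping frames and compute
# RMS and zero-crossing rate per frame.
import numpy as np

def frame_features(x, frame_len=1024, hop=512):
    feats = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        # Fraction of adjacent sample pairs whose sign differs.
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
        feats.append((rms, zcr))
    return np.array(feats)

sr = 8000
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 440 * t)   # one second of a 440 Hz tone
feats = frame_features(x)
print(feats.shape)                       # (n_frames, 2)
```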
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
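As a toy illustration of the autocorrelation approach listed above, here is a hypothetical sketch that recovers the tempo of an idealized onset-strength envelope (the envelope, frame rate, and BPM search range are invented; real envelopes come from an onset detection function):<br />

```python
# Hypothetical sketch: tempo estimation by autocorrelating an impulse-train
# onset envelope at 120 BPM, sampled at 100 frames per second.
import numpy as np

fps = 100                       # envelope frames per second
period = int(fps * 60 / 120)    # 120 BPM -> 50 frames between beats
env = np.zeros(1000)
env[::period] = 1.0             # idealized onset strength envelope

ac = np.correlate(env, env, mode='full')[len(env) - 1:]  # lags 0..999
# Search only lags corresponding to 40-200 BPM.
lo, hi = int(fps * 60 / 200), int(fps * 60 / 40)
lag = lo + np.argmax(ac[lo:hi])
print(60 * fps / lag)           # estimated tempo in BPM
```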
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
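Dynamic time warping, listed above for audio-score alignment and cover-song detection, can be sketched as a plain cost-matrix recursion. The sequences here are 1-D toys; real systems compare, e.g., chroma vectors per frame:<br />

```python
# Hypothetical sketch: DTW cost between two sequences via the standard
# dynamic-programming recursion over a cumulative cost matrix.
import numpy as np

def dtw_cost(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])        # local cost
            D[i, j] = d + min(D[i - 1, j],      # insertion
                              D[i, j - 1],      # deletion
                              D[i - 1, j - 1])  # match
    return D[n, m]

# The second sequence is a time-stretched version of the first,
# so the optimal warped alignment has zero cost.
print(dtw_cost([0, 1, 2, 1, 0], [0, 0, 1, 2, 2, 1, 0]))
```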
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation<br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_ML.pdf Lecture 3 Slides]<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
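A minimal sketch of the Lab 3 topic — clustering frame-level features with k-means — assuming scikit-learn in place of the Matlab code used in the lab; the 13-dimensional "MFCC" frames are synthetic stand-ins:<br />

```python
# Hypothetical sketch: k-means clustering of synthetic frame-level
# feature vectors standing in for MFCCs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Synthetic "MFCC" frames from two distinct timbres, 13 coefficients each.
frames = np.vstack([rng.normal(-1.0, 0.3, (40, 13)),
                    rng.normal(+1.0, 0.3, (40, 13))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(frames)
labels = km.labels_
# Frames from the same timbre should land in the same cluster.
print(labels[:40], labels[40:])
```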
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenter: Steve Tjoa<br />
Guest Speaker: Nick Bryan, Gautham Mysore<br />
<br />
<u>Day 4: Music Information Retrieval in Polyphonic Mixtures</u> [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 4 Slides]<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick's Mini-Course on Source Separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenter: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
<br />
'''Lab 5'''<br />
Chroma, Key estimation, and Chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today; these will be used in Friday's lab. Copy the archive from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB).<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley, 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15127MIR workshop 20132013-06-27T20:36:17Z<p>Kiemyang: /* Day 4: Music Information Retrieval in Polyphonic Mixtures */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com/, Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for: students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course <work in progress>'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S encoding; surround-sound processing; frequency-dependent spatial separation; LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
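A minimal sketch of the windowed feature-extraction loop and two of the features listed above (time-domain RMS and the spectral centroid, the first spectral moment). Python for illustration; the labs use Matlab, and the function names here are invented.

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Slice a signal into overlapping analysis frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def rms(frames):
    """Root-mean-square level of each frame (a time-domain feature)."""
    return np.sqrt(np.mean(frames ** 2, axis=1))

def spectral_centroid(frames, sr=44100):
    """First spectral moment: magnitude-weighted mean frequency per frame."""
    window = np.hanning(frames.shape[1])
    mags = np.abs(np.fft.rfft(frames * window, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    return (mags @ freqs) / np.maximum(mags.sum(axis=1), 1e-12)
```

Each frame yields one feature vector; stacking them over time gives the matrix that the statistical/perceptual processing stage consumes.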
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
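The onset detection function mentioned above can be illustrated with a spectral-flux sketch: sum the half-wave-rectified increase in magnitude between consecutive STFT frames. This is a Python sketch for illustration (the lab's ODF example is in Octave/Matlab), and the function name is invented.

```python
import numpy as np

def spectral_flux_odf(x, frame_len=1024, hop=512):
    """Onset detection function: per-frame sum of half-wave-rectified
    magnitude increases between consecutive spectra."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    frames = np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])
    mags = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    flux = np.diff(mags, axis=0)          # change from frame i to frame i+1
    return np.maximum(flux, 0.0).sum(axis=1)
```

Peaks of the ODF are onset candidates; beat tracking then looks for a periodic pattern in those peaks, e.g. via autocorrelation of the ODF.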
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
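As a minimal illustration of monophonic pitch detection, one of the approaches covered is autocorrelation: the signal correlates strongly with itself at lags equal to its period. A Python sketch (the labs use Matlab; the function name is invented):

```python
import numpy as np

def estimate_pitch(x, sr, fmin=50.0, fmax=1000.0):
    """Pick the autocorrelation lag with the strongest self-similarity
    inside the plausible period range, and convert it to Hz."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag
```

Restricting the lag search to [sr/fmax, sr/fmin] avoids the trivial peak at lag 0 and octave errors at very long lags.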
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Presenter: Steve Tjoa (PCA, LDA, k-means, SVM)<br />
<br />
(see LDA lab 2009 day 5)<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
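The Lab 3 clustering step can be sketched as plain k-means over (for example) per-frame MFCC vectors. Python for illustration; the lab itself is in Matlab, and the function name is invented.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Alternate two steps: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):            # guard against empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

The resulting cluster labels group perceptually similar frames or sounds, which is the basis of the similarity-search demo.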
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenter: Steve Tjoa<br />
Guest Speakers: Nick Bryan, Gautham Mysore<br />
<br />
<u>Day 4: Music Information Retrieval in Polyphonic Mixtures</u> [http://ccrma.stanford.edu/workshops/mir2013/ccrma20130627.pdf Lecture 4 Slides]<br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
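A sketch of the NMF building block, using the standard Lee-Seung multiplicative updates for the Euclidean objective. Python for illustration; in a transcription or separation system, V would be a magnitude spectrogram, W its spectral templates, and H their time-varying activations.

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Factor a nonnegative matrix V (freq x time) into templates W and
    activations H by multiplicative updates on ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

Because the updates are multiplicative, W and H stay nonnegative throughout, which is what makes the parts-based (notes, drums) interpretation possible.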
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick's Mini Course on Source Separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenter: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
* IR Evaluation Metrics (precision, recall, F-measure, AROC, ...)<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
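For reference, the set-based versions of these metrics can be written out directly (a Python sketch; the function name is invented):

```python
def precision_recall_f(retrieved, relevant):
    """Precision = fraction of retrieved items that are relevant;
    recall = fraction of relevant items that were retrieved;
    F-measure = their harmonic mean."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f
```

The harmonic mean punishes a system that buys recall by retrieving everything, or precision by retrieving almost nothing.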
<br />
<br />
'''Lab 5'''<br />
Chroma, Key estimation, and Chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
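The chroma vector underlying the key-estimation and chord-recognition lab can be sketched by folding spectral magnitudes onto 12 pitch classes. Python for illustration (the lab is in Matlab); here pitch class 0 is arbitrarily chosen as A, with A440 as the reference.

```python
import numpy as np

def chroma_from_spectrum(mags, sr, n_fft):
    """Fold FFT-bin magnitudes onto the 12 pitch classes."""
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    chroma = np.zeros(12)
    for f, m in zip(freqs[1:], mags[1:]):              # skip the DC bin
        pc = int(round(12 * np.log2(f / 440.0))) % 12  # 0 = pitch class A
        chroma[pc] += m
    return chroma / max(chroma.sum(), 1e-12)
```

Because octave information is discarded, chroma vectors for a chord are similar regardless of inversion or register, which is what makes them useful for key and chord estimation.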
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
<code>for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done</code><br />
* Extract CAL500 per-song features to .mat or .csv using the features from today. These will be used in Friday's lab. Copy the archive from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (16 GB).<br />
<br />
== Software, Libraries, and Examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab, Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, A. Klapuri and M. Davy (editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (editors)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15126MIR workshop 20132013-06-27T20:31:12Z<p>Kiemyang: /* Day 4: Music Information Retrieval in Polyphonic Mixtures */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** [http://www.linkedin.com/in/jayleboeuf/ Jay LeBoeuf], [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction; generation of higher-level features such as chord estimates; audio similarity, clustering, search, and retrieval techniques; and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
'''Glossary of Terms to be used in this course <work in progress>'''<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial of Matlab or audio signal processing will split off and received small group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([https://discover.izotope.com/ Rhythmic Similarity])<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add in MFCCs to classification and test w Cross validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
* See [https://github.com/stevetjoa/ccrma/blob/master/odf_of_file.m Onset Detection Function example] within the MIR matlab codebase in Octave/Matlab.<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
(PCA, LDA, k-means, SVM) - Steve<br />
<br />
(see LDA lab 2009 day 5)<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] <br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenter: Steve Tjoa<br />
Guest Speaker: Nick Bryan, Gautham Mysore<br />
<br />
<u>Day 4: Music Information Retrieval in Polyphonic Mixtures</u><br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
Nick & Gautham's latest publications: <br><br />
https://ccrma.stanford.edu/~gautham/Site/Publications.html<br><br />
https://ccrma.stanford.edu/~njb/<br />
<br />
Nick Mini Course of Source Separation: <br><br />
https://ccrma.stanford.edu/~njb/teaching/sstutorial/<br />
<br />
'''Lab 4'''<br />
* [https://github.com/stevetjoa/ccrma#lab-4 Lab 4 Description]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenters: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
<br />
'''Lab 5'''<br />
Chroma, Key estimation, and Chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
** [http://ccrma.stanford.edu/workshops/mir2013/Lab5-SVMs.htm SVM Lab]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL 500 per-song features to .mat or .csv using features from today. This will be used on lab for Friday. Copy it from the folder ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware it's a 2Gb .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* [see also below references]<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_Workshop_2013&diff=15104MIR Workshop 20132013-06-26T01:05:06Z<p>Kiemyang: </p>
<hr />
<div>Go here instead: https://ccrma.stanford.edu/wiki/MIR_workshop_2013</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_Workshop_2013&diff=15103MIR Workshop 20132013-06-25T23:43:37Z<p>Kiemyang: Blanked the page</p>
<hr />
<div></div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15100MIR workshop 20132013-06-25T23:13:50Z<p>Kiemyang: /* Day 2: Beat, Rhythm, Pitch and Chroma Analysis */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** Jay LeBoeuf, [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction; generation of higher-level features such as chord estimates; audio similarity clustering, search, and retrieval techniques; and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
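Instance-based classification with k-NN, as covered above, fits in a few lines. The labs use Matlab, but here is an illustrative NumPy sketch (the feature values and class labels are toy data invented for this example, not workshop material):<br />

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training examples (Euclidean distance)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    # Majority vote over the k nearest labels.
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Toy 2-D feature vectors (think: zero-crossing rate, spectral centroid).
train_X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],   # class 0
                    [0.80, 0.90], [0.90, 0.80], [0.85, 0.85]])  # class 1
train_y = np.array([0, 0, 0, 1, 1, 1])

print(knn_classify(train_X, train_y, np.array([0.12, 0.18])))  # prints 0
print(knn_classify(train_X, train_y, np.array([0.90, 0.90])))  # prints 1
```

Evaluating such a classifier honestly then requires the train/test splits and cross-validation discussed in the lecture.<br />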
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
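To make the feature list above concrete, the sketch below computes per-frame RMS, zero-crossing rate, and spectral centroid. It is an illustrative NumPy toy (the labs themselves are in Matlab), and the frame/hop sizes are just common defaults, not values prescribed by the workshop:<br />

```python
import numpy as np

def frame_features(x, sr, frame_size=1024, hop=512):
    """Per-frame RMS, zero-crossing rate, and spectral centroid."""
    win = np.hanning(frame_size)  # taper each frame to reduce spectral leakage
    freqs = np.fft.rfftfreq(frame_size, d=1.0 / sr)
    feats = []
    for start in range(0, len(x) - frame_size + 1, hop):
        frame = x[start:start + frame_size]
        rms = np.sqrt(np.mean(frame ** 2))
        # Fraction of sample-to-sample sign changes.
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
        mag = np.abs(np.fft.rfft(frame * win))
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
        feats.append((rms, zcr, centroid))
    return np.array(feats)

# Sanity check on a pure 440 Hz tone: RMS near 1/sqrt(2), low ZCR,
# spectral centroid near 440 Hz.
sr = 44100
t = np.arange(sr) / sr
rms, zcr, centroid = frame_features(np.sin(2 * np.pi * 440 * t), sr)[0]
```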
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.pdf PDF Lab 1 - Basic Feature Extraction and Classification] <br><br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([http://discover.mediamined.com/analysisSubmission.php Rhythmic Similarity ])<br />
<br />
<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
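As a minimal illustration of the autocorrelation approach listed above, the NumPy sketch below estimates tempo from an onset-strength envelope. A synthetic impulse train stands in for a real onset detection function; this is illustrative only, not the workshop's Matlab/Octave code:<br />

```python
import numpy as np

def estimate_tempo(onset_env, frame_rate, bpm_range=(60, 180)):
    """Estimate tempo by autocorrelating an onset-strength envelope and
    picking the strongest lag inside a plausible beat-period range."""
    env = onset_env - onset_env.mean()
    ac = np.correlate(env, env, mode='full')[len(env) - 1:]
    # Convert the BPM search range to lags (in frames).
    min_lag = int(frame_rate * 60.0 / bpm_range[1])
    max_lag = int(frame_rate * 60.0 / bpm_range[0])
    lag = min_lag + np.argmax(ac[min_lag:max_lag + 1])
    return 60.0 * frame_rate / lag

# Synthetic onset envelope: one onset every 0.5 s (120 BPM) at 100 frames/s.
frame_rate = 100
env = np.zeros(1000)
env[::50] = 1.0
print(estimate_tempo(env, frame_rate))  # prints 120.0
```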
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
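A minimal key-estimation sketch in the spirit of the lecture: correlate a 12-bin chroma (pitch class profile) against the Krumhansl-Schmuckler major-key template at all 12 rotations. The chroma here is a toy C-major triad rather than real audio, and minor keys are omitted for brevity:<br />

```python
import numpy as np

PITCH_CLASSES = ['C', 'C#', 'D', 'D#', 'E', 'F',
                 'F#', 'G', 'G#', 'A', 'A#', 'B']

# Krumhansl-Schmuckler major-key profile (probe-tone ratings).
MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def estimate_key(chroma):
    """Correlate a 12-bin chroma vector against the major profile rotated
    to each of the 12 possible tonics; return the best-matching key."""
    scores = [np.corrcoef(chroma, np.roll(MAJOR_PROFILE, k))[0, 1]
              for k in range(12)]
    return PITCH_CLASSES[int(np.argmax(scores))] + ' major'

# Toy chroma: energy only on the C major triad (C, E, G).
chroma = np.zeros(12)
chroma[[0, 4, 7]] = 1.0
print(estimate_key(chroma))  # prints "C major"
```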
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification system and test with cross-validation <br />
* [https://github.com/stevetjoa/ccrma#lab-2 Lab 2 description]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
** [https://ccrma.stanford.edu/workshops/mir2012/ODF.zip Onset Detection Function example code in Octave/Matlab]<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
(PCA, LDA, k-means, SVM) - Steve<br />
<br />
(see LDA lab 2009 day 5)<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] [http://discover-test.mediamined.com/login.html login]<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
* [http://ccrma.stanford.edu/workshops/mir2012/Lab5-SVMs.pdf SVM]<br />
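A bare-bones k-means sketch in NumPy, for reference alongside the lab materials above (Lab 3 itself uses Matlab; the two Gaussian "MFCC" blobs here are synthetic toy data):<br />

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means: alternate assigning points to the nearest centroid
    and moving each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster goes empty.
        centroids = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

# Two well-separated blobs of toy 2-D "MFCC" vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])
labels, _ = kmeans(X, k=2)
print(labels)
```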
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenters: Steve Tjoa, Nick Bryant, Gautham Mysore<br />
<br />
<u>Day 4, Part 1: Music Information Retrieval in Polyphonic Mixtures</u><br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
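The NMF idea above can be sketched with Lee & Seung's multiplicative updates. Below is an illustrative NumPy toy in which a "spectrogram" built from two known spectral templates is factored back into templates and activations (synthetic data, not the workshop's code):<br />

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Factor a nonnegative matrix V ~= W @ H using Lee & Seung's
    multiplicative updates for the Euclidean objective."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)  # update templates
    return W, H

# Toy "spectrogram": two spectral templates active at different times.
templates = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
activations = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
V = templates @ activations
W, H = nmf(V, rank=2)
print(np.linalg.norm(V - W @ H))  # reconstruction error should be small
```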
<br />
<u>Day 4, Part 2: TBD</u><br />
<br />
<br />
'''Lab 4'''<br />
* [http://ccrma.stanford.edu/workshops/mir2012/tjoa20120627ccrma.pdf Lecture and Lab 3 Slides, Steve Tjoa, 2012]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenter: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
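The retrieval metrics referenced above reduce to simple set arithmetic. Here is a small self-contained Python illustration (the item ids are made up for the example):<br />

```python
def precision_recall_f(retrieved, relevant):
    """Precision, recall, and F-measure for a retrieval result, given
    sets of retrieved and ground-truth-relevant item ids."""
    true_pos = len(retrieved & relevant)
    precision = true_pos / len(retrieved) if retrieved else 0.0
    recall = true_pos / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall > 0 else 0.0)
    return precision, recall, f

# 8 items retrieved, 10 relevant, 6 in common.
retrieved = set(range(8))
relevant = set(range(2, 12))
print(precision_recall_f(retrieved, relevant))  # (0.75, 0.6, ~0.667)
```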
<br />
<br />
'''Lab 5'''<br />
Chroma, Key estimation, and Chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today. This will be used in Friday's lab. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references listed below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley, 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15099MIR workshop 20132013-06-25T23:12:08Z<p>Kiemyang: /* Day 2: Beat, Rhythm, Pitch and Chroma Analysis */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** Jay LeBoeuf, [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction; generation of higher-level features such as chord estimates; audio similarity clustering, search, and retrieval techniques; and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.pdf PDF Lab 1 - Basic Feature Extraction and Classification] <br><br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([http://discover.mediamined.com/analysisSubmission.php Rhythmic Similarity ])<br />
<br />
<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_pitch.pdf Lecture 4 Slides]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification system and test with cross-validation <br />
* [http://ccrma.stanford.edu/workshops/mir2012/FeatureDetection_lab2_2012.pdf Feature extraction and cross-validation in MATLAB]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
** [https://ccrma.stanford.edu/workshops/mir2012/ODF.zip Onset Detection Function example code in Octave/Matlab]<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
(PCA, LDA, k-means, SVM) - Steve<br />
<br />
(see LDA lab 2009 day 5)<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] [http://discover-test.mediamined.com/login.html login]<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
* [http://ccrma.stanford.edu/workshops/mir2012/Lab5-SVMs.pdf SVM]<br />
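As a language-neutral companion to the Matlab lab (our own illustrative sketch, not the lab solution), the core of k-means is a short NumPy loop alternating nearest-centroid assignment with centroid re-estimation, the same loop used to cluster MFCC feature vectors:<br />

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain k-means on X of shape (n_points, n_features):
    alternate nearest-centroid assignment and centroid update."""
    centroids = X[:: max(1, len(X) // k)][:k].copy()  # spread-out init
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Toy usage: two well-separated blobs of 2-D "feature vectors"
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels, centroids = kmeans(X, k=2)
```
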
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenters: Steve Tjoa, Nick Bryan, Gautham Mysore<br />
<br />
<u>Day 4, Part 1: Music Information Retrieval in Polyphonic Mixtures</u><br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
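To make the idea concrete, here is a tiny NumPy sketch (illustrative only, not the lecture code) of NMF with the classic Lee-Seung multiplicative updates, the decomposition commonly applied to magnitude spectrograms for transcription and separation:<br />

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0):
    """Factor a nonnegative matrix V (e.g. a magnitude spectrogram,
    freq x time) as V ~ W @ H using multiplicative updates.
    W holds k spectral templates; H holds their activations over time."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 1e-3
    H = rng.random((k, V.shape[1])) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update templates
    return W, H

# Toy usage: a "spectrogram" that really is a mix of two templates
W_true = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
H_true = np.abs(np.random.default_rng(1).random((2, 10)))
V = W_true @ H_true
W, H = nmf(V, k=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form guarantees W and H stay nonnegative, which is what lets the learned templates be read as spectra of individual notes or sources.<br />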
<br />
<u>Day 4, Part 2: TBD</u><br />
<br />
<br />
'''Lab 4'''<br />
* [http://ccrma.stanford.edu/workshops/mir2012/tjoa20120627ccrma.pdf Lecture and Lab 3 Slides, Steve Tjoa, 2012]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenter: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
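The standard IR metrics are easy to compute directly; a quick plain-Python illustration (our own function names):<br />

```python
def precision_recall_f1(retrieved, relevant):
    """Precision, recall and F-measure for one query, where `retrieved`
    is what the system returned and `relevant` is the ground-truth set."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_pos = len(retrieved & relevant)
    precision = true_pos / len(retrieved) if retrieved else 0.0
    recall = true_pos / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# Usage: 4 items returned, 3 of them among the 6 relevant ones
p, r, f = precision_recall_f1([1, 2, 3, 9], [1, 2, 3, 4, 5, 6])
# p = 0.75, r = 0.5, f = 0.6
```
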
<br />
<br />
'''Lab 5'''<br />
Chroma, Key estimation, and Chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL 500 per-song features to .mat or .csv using the features from today. These will be used in Friday's lab. Copy the archive from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== Software, Libraries, Examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references listed below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>
Kiemyang
https://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15098
MIR workshop 2013 (2013-06-25T14:44:07Z)
<p>Kiemyang: /* Day 2: Beat, Rhythm, Pitch and Chroma Analysis */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** Jay LeBoeuf, [http://www.izotope.com iZotope, Inc.]<br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
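The labs implement this in Matlab; the core of an instance-based classifier is small enough to sketch in NumPy (illustrative code with made-up feature values):<br />

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify feature vector x by majority vote among the labels
    of its k nearest (Euclidean) neighbors in the training set."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest_labels = y_train[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest_labels, return_counts=True)
    return values[counts.argmax()]

# Toy usage: a 1-D feature (say, mean zero-crossing rate) that
# separates two drum classes: 0 = kick (low ZCR), 1 = snare (high ZCR)
X_train = np.array([[0.02], [0.03], [0.04], [0.30], [0.35], [0.40]])
y_train = np.array([0, 0, 0, 1, 1, 1])
label = knn_predict(X_train, y_train, np.array([0.33]))
```

Evaluating such a classifier honestly requires the train/test splits and cross-validation covered above, since k-NN trivially memorizes its training data.<br />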
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S encoding, surround-sound processing, frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
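The windowed analysis loop above is the backbone of every feature extractor; here is a minimal NumPy sketch (our own illustration, not MPEG-7 reference code) that slides a frame across the signal and computes two of the time-domain features listed, RMS energy and zero-crossing rate:<br />

```python
import numpy as np

def frame_features(x, frame_size=1024, hop=512):
    """Per-frame RMS energy and zero-crossing rate.
    Returns an array of shape (n_frames, 2)."""
    feats = []
    for start in range(0, len(x) - frame_size + 1, hop):
        frame = x[start:start + frame_size]
        rms = np.sqrt(np.mean(frame ** 2))
        # a crossing flips the sign: diff(sign) is +/-2 there, 0 elsewhere
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
        feats.append((rms, zcr))
    return np.array(feats)

# Toy usage: a quiet low tone followed by a loud high tone
sr = 8000
t = np.arange(sr) / sr
x = np.concatenate([0.01 * np.sin(2 * np.pi * 100 * t),
                    np.sin(2 * np.pi * 1000 * t)])
feats = frame_features(x)
```

Stacking such per-frame values (with frequency-domain features appended) yields the feature vectors consumed by the classifiers later in the week.<br />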
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.pdf PDF Lab 1 - Basic Feature Extraction and Classification] <br><br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.htm HTML Lab 1 - Basic Feature Extraction and Classification] <br><br />
<br />
<br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial of Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture3.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([http://discover.mediamined.com/analysisSubmission.php Rhythmic Similarity ])<br />
<br />
<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
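Many of the autocorrelation-based approaches above share one core step; as a toy NumPy illustration (a deliberate simplification, not any particular published tracker), a global tempo can be read off the strongest autocorrelation lag of an onset-strength envelope:<br />

```python
import numpy as np

def estimate_tempo(onset_env, fps, bpm_min=60.0, bpm_max=180.0):
    """Pick the tempo (BPM) whose beat period maximizes the
    autocorrelation of an onset-strength envelope sampled at
    `fps` frames per second."""
    n = len(onset_env)
    r = np.correlate(onset_env, onset_env, mode="full")[n - 1:]
    lag_min = int(round(fps * 60.0 / bpm_max))   # fastest allowed beat
    lag_max = int(round(fps * 60.0 / bpm_min))   # slowest allowed beat
    best_lag = lag_min + np.argmax(r[lag_min:lag_max + 1])
    return 60.0 * fps / best_lag

# Toy usage: an impulse every 0.5 s (i.e. 120 BPM) at 100 frames/s
fps = 100
env = np.zeros(1000)
env[::50] = 1.0
tempo = estimate_tempo(env, fps)
```

Real trackers must also handle rubato, metrical ambiguity (tatum vs. tactus), and octave errors in the tempo estimate, which is why the techniques above go well beyond this single autocorrelation.<br />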
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> (slides to be uploaded after lecture)<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
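Dynamic time warping, the workhorse behind audio-to-score alignment and cover-song matching, reduces to a small dynamic program; a minimal NumPy sketch (illustrative, with toy one-dimensional stand-ins for chroma frames):<br />

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping cost between two feature sequences
    (one row per frame), with Euclidean frame-to-frame distances."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, and diagonal match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy usage: the same "melody", one version stretched in time
a = np.array([[0.0], [0.0], [1.0], [2.0], [2.0]])
b = np.array([[0.0], [1.0], [1.0], [2.0]])
d = dtw_distance(a, b)   # the stretched pair aligns with zero cost
```
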
<br />
'''Lab 2:''' <br />
Part 1: Tempo Extraction<br />
Part 2: Add MFCCs to the classification task and test with cross-validation <br />
* [http://ccrma.stanford.edu/workshops/mir2012/FeatureDetection_lab2_2012.pdf Feature extraction and cross-validation in MATLAB]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
* [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
** [https://ccrma.stanford.edu/workshops/mir2012/ODF.zip Onset Detection Function example code in Octave/Matlab]<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
(PCA, LDA, k-means, SVM) - Steve<br />
<br />
(see LDA lab 2009 day 5)<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] [http://discover-test.mediamined.com/login.html login]<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
* [http://ccrma.stanford.edu/workshops/mir2012/Lab5-SVMs.pdf SVM]<br />
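Dimensionality reduction via PCA comes almost for free from the SVD; a small NumPy sketch (our illustration, not the lab code):<br />

```python
import numpy as np

def pca(X, n_components):
    """Project rows of X onto their top principal components,
    computed from the SVD of the mean-centered data.
    Returns (projections, components)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, components

# Toy usage: 3-D feature vectors that really vary along a single line,
# so one component captures all the variance
rng = np.random.default_rng(0)
t = rng.normal(size=50)
X = np.outer(t, [1.0, 2.0, 3.0])
Y, components = pca(X, 1)
```

Unlike LDA, PCA ignores class labels; it only finds the directions of greatest variance, which is why the lecture treats the two side by side.<br />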
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenters: Steve Tjoa, Nick Bryan, Gautham Mysore<br />
<br />
<u>Day 4, Part 1: Music Information Retrieval in Polyphonic Mixtures</u><br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
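Whatever model produces the per-source estimates (NMF templates, for instance), the separation step itself is often just time-frequency masking; a NumPy sketch of Wiener-style soft masks (toy matrices standing in for magnitude spectrograms, and our own naming):<br />

```python
import numpy as np

def wiener_masks(source_mags):
    """Soft masks from per-source magnitude-spectrogram estimates:
    each source receives its share of the total power in every bin."""
    power = np.stack([m ** 2 for m in source_mags])
    return power / (power.sum(axis=0) + 1e-12)

# Toy usage: two "sources" living in different frequency bins
s1 = np.array([[1.0, 1.0], [0.0, 0.0]])   # all energy in bin 0
s2 = np.array([[0.0, 0.0], [2.0, 2.0]])   # all energy in bin 1
m1, m2 = wiener_masks([s1, s2])
mix = s1 + s2
est1, est2 = m1 * mix, m2 * mix           # masked mixture per source
```

In practice the masks are applied to the complex STFT of the mixture and inverted back to the time domain; sources that overlap in frequency are where the interesting difficulties start.<br />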
<br />
<u>Day 4, Part 2: TBD</u><br />
<br />
<br />
'''Lab 4'''<br />
* [http://ccrma.stanford.edu/workshops/mir2012/tjoa20120627ccrma.pdf Lecture and Lab 3 Slides, Steve Tjoa, 2012]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenter: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
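Alongside precision and recall, the area under the ROC curve has a handy rank interpretation: it is the probability that a randomly chosen positive example outscores a randomly chosen negative one. A plain-Python sketch (illustrative):<br />

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank statistic: the fraction
    of (positive, negative) pairs the classifier orders correctly,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Usage: one positive (score 0.3) slips below one negative (0.7),
# costing 1 of the 9 positive/negative pairs
area = auroc([0.9, 0.8, 0.3, 0.7, 0.2, 0.1], [1, 1, 1, 0, 0, 0])
```
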
<br />
<br />
'''Lab 5'''<br />
Chroma, Key estimation, and Chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL 500 per-song features to .mat or .csv using the features from today. These will be used in Friday's lab. Copy the archive from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== Software, Libraries, Examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references listed below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>
Kiemyang
https://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15094
MIR workshop 2013 (2013-06-24T23:26:30Z)
<p>Kiemyang: /* Day 1: Introduction to MIR, Signal Analysis and Feature Extraction */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** Jay LeBoeuf, [http://www.izotope.com iZotope, Inc.]<br />
** [http://stevetjoa.com/ Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
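The heuristics-vs-learning point above can be made concrete in a few lines. Below is a toy sketch (plain Python rather than the workshop's Matlab; all signals and names are illustrative) that computes a zero-crossing-rate feature and classifies a query with a tiny k-NN:<br />

```python
import math
import random

def zcr(signal):
    """Zero-crossing rate: fraction of adjacent sample pairs whose sign differs."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(signal) - 1)

def knn_predict(train, query, k=1):
    """train: list of (feature_tuple, label) pairs; plain Euclidean k-NN."""
    dists = sorted(
        (sum((x - y) ** 2 for x, y in zip(feat, query)), label)
        for feat, label in train
    )
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

random.seed(0)
n = 1000
sine = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]   # low-frequency tone
noise = [random.uniform(-1.0, 1.0) for _ in range(n)]          # broadband noise

# One training example per class, using ZCR as a one-dimensional feature.
train = [((zcr(sine),), "tone"), ((zcr(noise),), "noise")]
```

A fixed ZCR threshold would also separate these two signals; the point of the classifier is that it keeps working once more features and more classes are added.<br />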
<br />
<br><u>Day 1: Part 2</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing, Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
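The windowed feature-extraction loop described above amounts to "hop a frame across the signal, emit one feature per frame." A minimal sketch (plain Python; the frame size, hop, and test signal are illustrative) for framewise RMS:<br />

```python
import math

def frame_rms(signal, frame_size=512, hop=256):
    """One RMS value per hop: the basic windowed feature-extraction loop."""
    rms = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        rms.append(math.sqrt(sum(x * x for x in frame) / frame_size))
    return rms

# Silence followed by a full-scale 440 Hz tone at 44.1 kHz:
# the RMS trajectory should step up at the boundary.
signal = [0.0] * 2048 + [math.sin(2 * math.pi * 440 * t / 44100) for t in range(2048)]
rms = frame_rms(signal)
```

The same loop skeleton carries every other framewise feature (ZCR, spectral moments, MFCCs); only the per-frame computation changes.<br />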
<br />
<br><u>Lab 1:</u> <br><br />
<br />
*Matlab Introduction.<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.pdf Lab 1 - Basic Feature Extraction and Classification] <br><br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
** To receive an up-to-date version of the repository, from your repository folder: <code>git pull</code><br />
<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
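For readers skimming that refresher material: the FFT/STFT/windowing pipeline boils down to "window a frame, take its spectrum." A toy sketch with a naive DFT (plain Python; the frame length and test tone are illustrative - use a real FFT in practice):<br />

```python
import cmath
import math

def dft_mag(frame):
    """Naive DFT magnitude spectrum, O(N^2): fine for tiny frames only."""
    n = len(frame)
    return [
        abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
        for k in range(n // 2 + 1)
    ]

# Hann-windowed 64-sample frame of a sinusoid completing exactly 8 cycles:
# its energy should land in (and just around) bin 8.
n = 64
hann = [0.5 - 0.5 * math.cos(2 * math.pi * t / n) for t in range(n)]
frame = [math.sin(2 * math.pi * 8 * t / n) * hann[t] for t in range(n)]
spectrum = dft_mag(frame)
```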
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2012/CCRMA_MIR2012_Beat.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([http://discover.mediamined.com/analysisSubmission.php Rhythmic Similarity ])<br />
<br />
<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
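The autocorrelation approach listed above can be sketched in a few lines: autocorrelate an onset detection function (ODF) and pick the strongest lag within a plausible beat-period range. Plain Python toy; the ODF is synthetic, and the 86 frames/sec ODF rate is an illustrative assumption, not a workshop value:<br />

```python
def autocorr(x):
    """Autocorrelation of a mean-removed sequence, for lags 0..len(x)-1."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    return [sum(d[t] * d[t + lag] for t in range(n - lag)) for lag in range(n)]

# Synthetic onset detection function: one onset every 43 frames.
period = 43
odf = [1.0 if t % period == 0 else 0.0 for t in range(10 * period)]
ac = autocorr(odf)

# Strongest autocorrelation lag within a plausible beat-period range.
best_lag = max(range(20, 100), key=lambda lag: ac[lag])

# Convert lag to BPM, assuming (illustratively) an ODF rate of 86 frames/sec.
tempo_bpm = 60.0 * 86 / best_lag
```

Real ODFs are noisy and tempi drift, which is why the lecture also covers beat spectra, multi-resolution methods, and joint estimation.<br />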
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2011/ccrma_2011_pitch_reps.pdf Pitch Representation Slides (.pdf), George Tzanetakis]<br />
<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
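The pitch-class and chroma representations above rest on one mapping: frequency to MIDI note number to pitch class (mod 12). A toy sketch (plain Python; equal temperament with A4 = 440 Hz assumed, and the chroma here is just a histogram of already-detected frequencies, not a spectrogram-based chromagram):<br />

```python
import math

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class(freq_hz, a4=440.0):
    """Nearest equal-tempered pitch class for a frequency."""
    midi = round(69 + 12 * math.log2(freq_hz / a4))
    return PITCH_CLASSES[midi % 12]

def chroma(freqs):
    """Toy chroma vector: a 12-bin histogram of detected frequencies."""
    bins = [0] * 12
    for f in freqs:
        midi = round(69 + 12 * math.log2(f / 440.0))
        bins[midi % 12] += 1
    return bins
```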
<br />
'''Lab 2:''' <br />
Proposed topics:<br />
Part 1: Tempo extraction<br />
Part 2: Add MFCCs to the classification and test with cross-validation<br />
'''WORK IN PROGRESS:'''<br />
* [http://ccrma.stanford.edu/workshops/mir2012/FeatureDetection_lab2_2012.pdf Feature extraction and cross-validation in MATLAB]<br />
<br />
Matlab code for key estimation, chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
** [https://ccrma.stanford.edu/workshops/mir2012/ODF.zip Onset Detection Function example code in Octave/Matlab]<br />
<br />
* Kyogu Lee's Onset Detection Examples / Tempo <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/CATbox_v0.zip download CATbox (Computer Audition Toolbox): CATbox_v0.zip]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1_5.m Onset Time-domain method (lab1_5.m)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1_6.m Frequency-domain method: lab1_6.m]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1_7.m Phase-based method: lab1_7.m]<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
* Kyogu Lee's Example Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab2/labrosa-coversongid.tgz download Dan Ellis' coversong id toolbox: coversongs]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/wav/04_rock_and_roll_music.wav download an audio file]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab2/post_proc.m post processing] <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab2/adpthresholding.m adaptive thresholding]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab2/lab2_1.m Tempo estimation, beat tracking]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab2/lab2_2.m Extract new features]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
Presenter: Steve Tjoa<br />
Topics: PCA, LDA, k-means, SVM<br />
<br />
(see LDA lab 2009 day 5)<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] [http://discover-test.mediamined.com/login.html login]<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
* [http://ccrma.stanford.edu/workshops/mir2012/Lab5-SVMs.pdf SVM]<br />
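The clustering half of the lab can be previewed with a toy two-cluster k-means (plain Python, deterministic initialization; the two "blobs" stand in for made-up per-clip feature pairs and are not lab data):<br />

```python
import random

def kmeans2(points, iters=10):
    """Toy 2-cluster k-means on 2-D points with a deterministic initialization."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    # Init: the first point, plus the point farthest from it.
    centroids = [points[0], max(points, key=lambda p: d2(p, points[0]))]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            clusters[0 if d2(p, centroids[0]) <= d2(p, centroids[1]) else 1].append(p)
        centroids = [
            [sum(coord) / len(c) for coord in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

# Two well-separated blobs standing in for per-clip feature pairs.
rng = random.Random(1)
blob_a = [[rng.gauss(0.1, 0.02), rng.gauss(0.2, 0.02)] for _ in range(50)]
blob_b = [[rng.gauss(0.8, 0.02), rng.gauss(0.9, 0.02)] for _ in range(50)]
centroids = sorted(kmeans2(blob_a + blob_b), key=lambda c: c[0])
```

k-means is unsupervised: it recovers the two groups without labels, which is why the lab pairs it with MFCC features for exploring unlabeled collections.<br />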
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenters: Steve Tjoa, Nick Bryant, Gautham Mysore<br />
<br />
<u>Day 4, Part 1: Music Information Retrieval in Polyphonic Mixtures</u><br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
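Nonnegative matrix factorization, as used for transcription and source separation, approximates a magnitude spectrogram V as a product W·H of nonnegative factors (spectral templates times activations). A toy sketch of the Lee-Seung multiplicative updates (plain Python with tiny hand-made matrices; illustrative only, not the lecture code):<br />

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, rank, iters=300, seed=0):
    """Lee-Seung multiplicative updates: V (m x n) ~= W (m x rank) @ H (rank x n)."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    eps = 1e-9
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)] for i in range(rank)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)] for i in range(m)]
    return W, H

# A toy "magnitude spectrogram": two spectral templates switching on and off.
V = [[2, 2, 0, 0, 2],
     [2, 2, 0, 0, 2],
     [0, 0, 3, 3, 3],
     [0, 0, 3, 3, 3]]
W, H = nmf(V, rank=2)
approx = matmul(W, H)
err = sum(abs(V[i][j] - approx[i][j]) for i in range(4) for j in range(5))
```

The multiplicative form keeps W and H nonnegative by construction; the rows of H then read as per-template activation curves over time.<br />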
<br />
<u>Day 4, Part 2: TBD</u><br />
<br />
<br />
'''Lab 4'''<br />
* [http://ccrma.stanford.edu/workshops/mir2012/tjoa20120627ccrma.pdf Lecture and Lab 3 Slides, Steve Tjoa, 2012]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenter: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
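For a single query, the metrics in those references reduce to simple set arithmetic. A toy sketch (plain Python; the document IDs are made up):<br />

```python
def evaluate(retrieved, relevant):
    """Set-based precision, recall, and F-measure for one query."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f

# The system returns 4 documents; 3 of the 6 truly relevant ones are among them.
p, r, f = evaluate(["d1", "d2", "d3", "d9"], ["d1", "d2", "d3", "d4", "d5", "d6"])
```

The F-measure (harmonic mean) penalizes the imbalance between the two: a system can trivially maximize recall by returning everything, or precision by returning almost nothing.<br />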
<br />
<br />
'''Lab 5'''<br />
Chroma, Key estimation, and Chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
<code>for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done</code><br />
* Extract CAL500 per-song features to .mat or .csv using the features from today; these will be used in Friday's lab. Copy the archive from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (16 GB).<br />
<br />
== Software, Libraries, Examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, 2nd edition, Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab, Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, A. Klapuri and M. Davy (editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (editors)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs; available in the Stanford Music Library)<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Kiemyanghttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2013&diff=15089MIR workshop 20132013-06-24T08:15:35Z<p>Kiemyang: /* Day 1: Introduction to MIR, Signal Analysis and Feature Extraction */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval'''<br />
* Monday, June 24, through Friday, June 28, 2013. 9 AM to 5 PM every day.<br />
* Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx<br />
* Instructors: <br />
** Jay LeBoeuf, [http://www.izotope.com iZotope, Inc.], <br />
** [http://stevetjoa.com/, Steve Tjoa]<br />
** Leigh Smith, [http://www.izotope.com iZotope, Inc.]<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for: students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Schedule: Lectures & Labs ==<br />
<br />
=== Day 1: Introduction to MIR, Signal Analysis and Feature Extraction ===<br />
Presenters: Jay LeBoeuf, Leigh Smith<br />
<br />
<br><u>Day 1: Part 1</u> [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture1.pdf Lecture 1 Slides]<br />
<br />
* Introductions <br />
* CCRMA Introduction - (Carr/Sasha). <br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Demo: Using simple heuristics and thresholds (i.e. "Why do we need machine learning?")<br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics (Part 1)<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
<br />
<br><u>Day 1: Part 2</u> <br />
[http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_Lecture2.pdf Lecture 2 Slides]<br />
<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
<br />
<br><u>Lab 1:</u> <br><br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
* [http://ccrma.stanford.edu/workshops/mir2013/Lab%201%20-%20Basic%20feature%20extraction%20and%20classification%20%282013%29.pdf Lab 1 - Basic Feature Extraction and Classification] <br><br />
* From your home directory, simply type the following to obtain a copy of the repository: <code>git clone https://github.com/stevetjoa/ccrma.git</code><br />
<br />
Students who need a personal tutorial of Matlab or audio signal processing will split off and received small group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
=== Day 2: Beat, Rhythm, Pitch and Chroma Analysis ===<br />
Presenters: Leigh Smith, Steve Tjoa<br />
<br />
<br><u>Day 2: Part 1 Beat-finding and Rhythm Analysis</u> [http://ccrma.stanford.edu/workshops/mir2012/CCRMA_MIR2012_Beat.pdf Lecture 3 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
Demo: MediaMined Discover ([http://discover.mediamined.com/analysisSubmission.php Rhythmic Similarity ])<br />
<br />
<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
<br />
<br><u>Day 2, Part 2: Pitch and Chroma Analysis</u> [http://ccrma.stanford.edu/workshops/mir2011/ccrma_2011_pitch_reps.pdf Pitch Representation Slides (.pdf), George Tzanetakis]<br />
<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Query-by-humming <br />
** Music Transcription <br />
<br />
'''Lab 2:''' <br />
Proposed topic:<br />
Part 1: Tempo Extraction<br />
Part 2: Add in MFCCs to classification and test w Cross validation <br />
'''WORK IN PROGRESS''' <br />
* [http://ccrma.stanford.edu/workshops/mir2012/FeatureDetection_lab2_2012.pdf Feature extraction and cross-validation in MATLAB]<br />
'''WORK IN PROGRESS''' <br />
<br />
Matlab code for key estimation, chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
** [https://ccrma.stanford.edu/workshops/mir2012/ODF.zip Onset Detection Function example code in Octave/Matlab]<br />
<br />
* Kyogu Lee's Onset Detection Examples / Tempo <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/CATbox_v0.zip download CATbox (Computer Audition Toolbox): CATbox_v0.zip]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1_5.m Onset Time-domain method (lab1_5.m)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1_6.m Frequency-domain method: lab1_6.m]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1_7.m Phase-based method: lab1_7.m]<br />
<br />
* Bonus Slides: Temporal & Harmony Analysis <br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/4_rhythm.pdf Temporal Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
<br />
* Kyogu Lee's Example Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab2/labrosa-coversongid.tgz download Dan Ellis' coversong id toolbox: coversongs]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/wav/04_rock_and_roll_music.wav download an audio file]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab2/post_proc.m post processing] <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab2/adpthresholding.m adaptive thresholding]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab2/lab2_1.m Tempo estimation, beat tracking]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab2/lab2_2.m Extract new features]<br />
<br />
=== Day 3: Machine Learning, Clustering and Classification ===<br />
(PCA, LDA, k-means, SVM) - Steve<br />
<br />
(see LDA lab 2009 day 5)<br />
<br />
Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video] [http://discover-test.mediamined.com/login.html login]<br />
<br />
'''Lab 3'''<br />
Topic: MFCC + k-Means, Clustering<br />
* [http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf K-Means]<br />
* [http://ccrma.stanford.edu/workshops/mir2012/Lab5-SVMs.pdf SVM]<br />
<br />
=== Day 4: Music Information Retrieval in Polyphonic Mixtures ===<br />
Presenter: Steve Tjoa, Nick Bryant, Gautham Mysore<br />
<br />
<u>Day 4, Part 1: Music Information Retrieval in Polyphonic Mixtures</u><br />
* Music Transcription and Source Separation<br />
* Nonnegative Matrix Factorization<br />
* Sparse Coding<br />
<br />
<u>Day 4, Part 2: TBD</u><br />
<br />
<br />
'''Lab 4'''<br />
* [http://ccrma.stanford.edu/workshops/mir2012/tjoa20120627ccrma.pdf Lecture and Lab 3 Slides, Steve Tjoa, 2012]<br />
<br />
=== Day 5: Information Retrieval Metrics, Evaluation, Real World Considerations ===<br />
Presenters: Leigh Smith<br />
<br />
* [https://ccrma.stanford.edu/workshops/mir2012/CCRMA%202012%20day1%20v5.pdf Day 5 Slides (.pdf)]<br />
<br />
References: <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
<br />
'''Lab 5'''<br />
Chroma, Key estimation, and Chord recognition: <br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
<br><br />
<br />
=== Bonus Lab material ===<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
** [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
<br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Notes<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
* Extract CAL 500 per-song features to .mat or .csv using features from today. This will be used on lab for Friday. Copy it from the folder ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware it's a 2Gb .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, and recommended papers for each topic]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures ==<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2012 CCRMA MIR Summer Workshop 2012]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2011 CCRMA MIR Summer Workshop 2011]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2010 CCRMA MIR Summer Workshop 2010]<br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab, by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (editors)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000<br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* Artificial Intelligence: A Modern Approach, Second Edition, Stuart Russell and Peter Norvig, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
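The [[Low-Pass Filter]] utility above is a MATLAB script; for comparison, a rough pure-NumPy equivalent (a windowed-sinc FIR sketch under assumed parameters, not the wiki script itself) is:<br />

```python
import numpy as np

# Minimal windowed-sinc FIR low-pass filter. The tap count and window
# choice are illustrative defaults, not tuned values.

def lowpass_fir(signal, cutoff_hz, sr, num_taps=101):
    """Low-pass filter a 1-D signal; num_taps should be odd."""
    fc = cutoff_hz / sr                      # normalized cutoff (cycles/sample)
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * fc * n) * 2 * fc         # ideal low-pass impulse response
    h *= np.hamming(num_taps)                # taper to reduce ripple
    h /= h.sum()                             # unity gain at DC
    return np.convolve(signal, h, mode="same")
```

With an odd, symmetric kernel and mode="same", the output stays time-aligned with the input (linear phase, delay compensated).<br />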
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>