MIR workshop 2011 (CCRMA Wiki, revision of 2011-07-01T21:16:50Z by Deck)
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''"Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval"'''<br />
* 9 AM - 5 PM, Mon 06/27/2011 - Fri 07/01/2011<br />
* Instructors: <br />
- Jay LeBoeuf, [http://www.imagine-research.com Imagine Research ]<br />
- Rebecca Fiebrink, [http://www.cs.princeton.edu/~fiebrink/Rebecca_Fiebrink/welcome.html Princeton University]<br />
- Douglas Eck, Google Research [http://research.google.com Google]<br />
- Stephen Pope, [http://www.imagine-research.com Imagine Research ]<br />
- Steve Tjoa, University of Maryland / [http://www.imagine-research.com Imagine Research ]<br />
- Leigh Smith, [http://www.imagine-research.com Imagine Research ]<br />
- George Tzanetakis, [http://webhome.cs.uvic.ca/~gtzan/ University of Victoria]<br />
<br />
* Participants:<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" to and "make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, and transcription, and even to aid or generate real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Lectures & Labs ==<br />
<br><u>Day 1:</u> [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day1.pdf Lecture 1 Slides]<br />
* '''Presenters: Jay LeBoeuf & Rebecca Fiebrink'''<br />
* CCRMA Introduction - (Carr/Sasha). CCRMA Tour.<br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
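The Day 1 pipeline above (frame-wise features such as ZCR and spectral moments, fed to an instance-based k-NN classifier) can be sketched in Python with NumPy. Function names, frame sizes, and the distance metric are illustrative choices, not the lab's actual code:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent-sample pairs whose sign changes."""
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

def spectral_centroid(frame, sr):
    """First spectral moment: magnitude-weighted mean frequency (Hz)."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * mag) / (np.sum(mag) + 1e-12)

def knn_classify(train_feats, train_labels, query, k=3):
    """Instance-based classification: majority vote among the k nearest
    training examples under Euclidean distance."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

In practice each feature dimension should also be scaled (e.g. z-scored) before the distance computation, which is the "scaling of feature data" point above.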
<br><u>Lab 1:</u> <br><br />
* [https://ccrma.stanford.edu/workshops/mir2011/Lab_1_2011.pdf Lab 1 - Basic Feature Extraction and Classification] <br><br />
* [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
* [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
<br><u>Day 2:</u> [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day2.pdf Lecture 2 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
* '''Presenters: Leigh Smith & Stephen Pope'''<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing, Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
* Applications<br />
** Song clustering based on a variety of feature vectors<br />
** PCA of feature spaces using Weka<br />
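Two of the Day 2 ideas above, the onset detection function (here, half-wave-rectified spectral flux) and tempo estimation by autocorrelation, can be sketched in Python with NumPy. Parameter choices (frame size, hop, BPM range) are illustrative assumptions:

```python
import numpy as np

def onset_detection_function(x, frame_len=1024, hop=512):
    """Spectral flux ODF: per-frame sum of positive magnitude increases."""
    n_frames = 1 + (len(x) - frame_len) // hop
    window = np.hanning(frame_len)
    prev_mag = np.zeros(frame_len // 2 + 1)
    odf = []
    for i in range(n_frames):
        frame = x[i * hop : i * hop + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))
        odf.append(np.sum(np.maximum(mag - prev_mag, 0.0)))
        prev_mag = mag
    return np.array(odf)

def estimate_tempo(odf, sr, hop=512, bpm_range=(60, 180)):
    """Pick the ODF autocorrelation peak within a plausible beat-period range."""
    odf = odf - odf.mean()
    ac = np.correlate(odf, odf, mode='full')[len(odf) - 1:]
    frames_per_sec = sr / hop
    lo = int(frames_per_sec * 60 / bpm_range[1])   # shortest beat period
    hi = int(frames_per_sec * 60 / bpm_range[0])   # longest beat period
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * frames_per_sec / lag
```

Real beat trackers refine this with perceptual weighting and multiple metrical levels (tatum/tactus/meter), but the autocorrelation peak already captures the basic tempo hypothesis.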
<br><u>Lab 2:</u> <br />
* Feature extraction and flexible feature vectors in MATLAB, Marsyas, Aubio, libExtract<br />
* MATLAB/Weka code for sound clustering with a flexible feature vector<br />
* C++ API examples Marsyas, Aubio, libExtract - pre-built examples to read and customize<br />
* Extract CAL 500 per-song features to .mat or .csv using features from today. This will be used in Friday's lab. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/ODF.zip Onset Detection Function example code in Octave/Matlab]<br />
* Notes on C API configuration<br />
** FFTW<br />
./configure --help<br><br />
./configure --enable-float<br />
** libSndFile<br />
./configure --disable-external-libs --disable-sqlite<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
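The per-song feature extraction step above (windowed analysis loop summarized into one vector per song, written to .csv) could look like the following Python sketch. The features (frame RMS and ZCR, summarized by mean and standard deviation) and the CSV layout are illustrative, not the lab's prescribed format:

```python
import csv
import numpy as np

def song_feature_vector(x, frame_len=1024, hop=512):
    """Mean and std of per-frame RMS and ZCR: a minimal song-level summary."""
    feats = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        signs = np.sign(frame)
        zcr = np.mean(signs[:-1] != signs[1:])
        feats.append((rms, zcr))
    feats = np.array(feats)
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

def write_feature_csv(path, names, vectors):
    """One row per song: name followed by its feature values."""
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['song', 'rms_mean', 'zcr_mean', 'rms_std', 'zcr_std'])
        for name, vec in zip(names, vectors):
            writer.writerow([name] + [f'{v:.6f}' for v in vec])
```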
<br><u>Day 3</u> <br />
* '''Presenters: Stephen Pope & Steve Tjoa'''<br />
* [http://up.stevetjoa.com/tjoa20110629ccrma.pdf Lecture and Lab 3 Slides by Steve Tjoa]<br />
* [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day3.pdf Lecture 3 Slides]<br />
* Overview: 2nd-Stage Processing and Post-processing in MIR Applications<br />
* 2nd-Stage Processing<br />
** Thresholds and Data Pruning<br />
** Perceptual Mapping<br />
** Data Reduction: Averaging, GMMs, Running Averages<br />
** Feature-data smoothing: de-spiking, sticky values, filtering, etc.<br />
* Segmentation of music and non-musical audio<br />
** Segmentation based on islands of similar features<br />
** Segmentation based on regular difference peaks<br />
** Segmentation based on labeling<br />
* Post-processing: What are we doing?<br />
** Storing Feature Data: SQL, JSON, XML, etc.<br />
** Classification/Clustering/Transcription/Labeling<br />
* Classification: KNN vs SVM training and testing<br />
** SVM tools and APIs<br />
* Clustering vs Classification: Tree-based systems<br />
* Audio Transcription: Onsets and per-onset features<br />
* Other applications: source separation, similarity match, search, etc.<br />
* Classification/estimation in the presence of polyphony<br />
** Try basic approach on a musical mixture.<br />
** How well does it perform? <br />
** What do we do to improve its performance? ICA, NMF, K-SVD.<br />
** Matrix representations of data: spectrogram, chromagram, timbregram, etc.<br />
** Methods to improve NMF/K-SVD under heavy harmonic overlap<br />
* Applications<br />
** Feature vector pruning<br />
** Segmentation examples<br />
** SVMs for classification<br />
** Multipitch estimation, source separation, denoising<br />
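The NMF mentioned above for polyphonic mixtures can be sketched with the classic Lee-Seung multiplicative updates for the Euclidean objective. This is a generic NMF on any nonnegative matrix, not a tuned audio implementation; applied to a magnitude spectrogram V, the columns of W act as spectral templates and the rows of H as their activations over time:

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Factor V ~ W @ H with all entries nonnegative, via multiplicative
    updates that monotonically reduce ||V - WH||^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update templates
    return W, H
```

Under heavy harmonic overlap, the plain Euclidean objective struggles, which motivates the sparsity and harmonicity constraints discussed in the lecture.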
<br><u>Lab 3:</u> <br />
* 2nd-Stage Processing<br />
* SVM tools<br />
* Classification examples<br />
<br />
* If you finish early, see the "bonus labs" section below.<br />
<br />
<br><u>Day 4:</u> <br />
* '''Presenter: George Tzanetakis'''<br />
* [http://ccrma.stanford.edu/workshops/mir2011/ccrma_2011_pitch_reps.pdf Pitch Representation Slides (.pdf)]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/ccrma_2011_pitch_reps.pptx Pitch Representation Slides (.pptx)]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Analysis of religious cantillation (Computational Ethnomusicology) <br />
** Query-by-humming <br />
** Music Transcription <br />
* Tools <br />
** Marsyas <br />
** Python/NumPy/Matplotlib<br />
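The pitch-class-profile (chroma) representation above can be sketched in Python/NumPy by folding FFT magnitude energy into 12 pitch classes. The frequency range and equal-tempered mapping around A4 = 440 Hz are simplifying assumptions; real chroma extractors also handle tuning estimation and harmonics:

```python
import numpy as np

def chroma_vector(x, sr, tuning_a4=440.0):
    """Fold spectral energy into 12 pitch classes (C=0 ... B=11)."""
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    chroma = np.zeros(12)
    for f, m in zip(freqs, mag):
        if f < 27.5 or f > 4200.0:      # keep roughly A0..C8
            continue
        midi = 69 + 12 * np.log2(f / tuning_a4)   # nearest equal-tempered note
        chroma[int(round(midi)) % 12] += m ** 2
    total = chroma.sum()
    return chroma / total if total > 0 else chroma
```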
<br><u>Lab 4: </u><br />
* Marsyas compilation <br />
** Instructions for CCRMA Machines [http://ccrma.stanford.edu/workshops/mir2011/marsyas_ccrma2011.pdf marsyas_ccrma2011.pdf]<br />
*** SKT: If you get an error about Python.h, install the package python2.7-dev (for version 2.7).<br />
* Marsyas tour <br />
* Plotting and prototyping using the Marsyas Python bindings<br />
* Writing some C++ Marsyas code <br />
* DTW in Matlab [http://labrosa.ee.columbia.edu/matlab/dtw/ Dan Ellis DTW Matlab example] <br />
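For reference alongside the Matlab example, the DTW recurrence can be sketched in a few lines of Python. This is the textbook dynamic program with unit steps and absolute-difference cost on 1-D sequences, not Dan Ellis's implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Minimum-cost alignment of sequences a and b: each cell takes the
    local cost plus the cheapest of match, insertion, or deletion."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

For audio-score alignment the scalar cost is replaced by a distance between feature vectors (e.g. chroma frames), but the recurrence is unchanged.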
<br />
<br />
<br><u>Day 5:</u> <br />
* '''Presenter: Douglas Eck'''<br />
* [http://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day5.pdf Day 5 Slides (.pdf)]<br />
* Music Recommendation<br />
** Overview of music recommendation. What's hard about it?<br />
** Some statistics and observations about the music industry and the need for recommendation.<br />
** Point-Counterpoint: Should we bother with content-based analysis?<br />
* Autotagging<br />
** Features for autotagging (some of this will be review, given days 1 through 4.)<br />
** Demos of clustering for different types of acoustic features.<br />
** Training Data.<br />
** Classifiers (focus on AdaBoost) <br />
** Feature selection.<br />
** Evaluation with lots of examples.<br />
* Time permitting:<br />
** Advanced features <br />
** Sparse coding<br />
** Using musical structure.<br />
<br />
<br><u>Lab 5</u><br />
<br />
See [[MIR_workshop_2011_day5_lab]] for a full description. Here is a summary:<br />
<br />
* The basics (some Python code available to help).<br />
** Calculate acoustic features on CAL500 dataset (students should have already done this.) <br />
** Read in user tag annotations from same dataset provided by UCSD.<br />
** Build similarity matrix based on word vectors derived from these annotations.<br />
** Query similarity matrix with a track to get top hits based on cosine distance.<br />
** Build second similarity matrix using acoustic features. <br />
** Query this similarity matrix with track to get top hits based on cosine distance. <br />
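The similarity-matrix steps above can be sketched in Python with NumPy: rows of X are per-track vectors (either word-vector annotations or acoustic features), and querying is a sort of one row of the cosine-similarity matrix. Names and the toy data are illustrative, not the provided lab code:

```python
import numpy as np

def cosine_similarity_matrix(X):
    """Row-wise cosine similarity: S[i, j] = cos(angle between tracks i, j)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.maximum(norms, 1e-12)   # guard against zero-norm rows
    return Xn @ Xn.T

def top_hits(S, query_index, k=3):
    """Indices of the k most similar tracks, excluding the query itself."""
    order = np.argsort(-S[query_index])
    return [i for i in order if i != query_index][:k]
```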
<br />
* Extra (I didn't write code for this, but can help students find examples).<br />
** Query the EchoNest for additional acoustic features and compare to yours. <br />
** Use the CAL500 user annotations as ground truth and evaluate your audio features (ROC curve or some precision measure). <br />
** Compare a 2D visualization of acoustic features versus UCSD user annotations.<br />
<br><br />
A file of features for the CAL500 data set (simple feature vector) is in /usr/ccrma/courses/mir2011/Cal500_Features.csv<br />
<br><br />
<br><u>Bonus Lab material</u><br />
* Insert your bonus lab materials here...<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* See also the references listed below<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011&diff=11980MIR workshop 20112011-07-01T21:00:05Z<p>Deck: /* Lectures & Labs */</p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''"Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval"<br />
'''<br />
* 9-5 PM. Mon, 06/27/2011 - Fri, 07/01/2011<br />
* Instructors: <br />
- Jay LeBoeuf, [http://www.imagine-research.com Imagine Research ]<br />
- Rebecca Fiebrink, [http://www.cs.princeton.edu/~fiebrink/Rebecca_Fiebrink/welcome.html Princeton University]<br />
- Douglas Eck, Google Research [http://research.google.com Google]<br />
- Stephen Pope, [http://www.imagine-research.com Imagine Research ]<br />
- Steve Tjoa, University of Maryland / [http://www.imagine-research.com Imagine Research ]<br />
- Leigh Smith, [http://www.imagine-research.com Imagine Research ]<br />
- George Tzanetakis, [http://webhome.cs.uvic.ca/~gtzan/ University of Victoria]<br />
<br />
* Participants:<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for: students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. The presentations will be applied, multimedia-rich, overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Lectures & Labs ==<br />
<br><u>Day 1:</u> [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day1.pdf Lecture 1 Slides]<br />
* '''Presenters: Jay LeBoeuf & Rebecca Fiebrink'''<br />
* CCRMA Introduction - (Carr/Sasha). CCRMA Tour.<br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
<br><u>Lab 1:</u> <br><br />
* [https://ccrma.stanford.edu/workshops/mir2011/Lab_1_2011.pdf Lab 1 - Basic Feature Extraction and Classification] <br><br />
* [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
* [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
Students who need a personal tutorial of Matlab or audio signal processing will split off and received small group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
<br><u>Day 2:</u> [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day2.pdf Lecture 2 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
* '''Presenters: Leigh Smith & Stephen Pope'''<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
* Applications<br />
** Song clustering based on a variety of feature vectors<br />
** PCA of feature spaces using Weka<br />
<br><u>Lab 2:</u> <br />
* Feature extraction and flexible feature vectors in MATLAB, Marsyas, Aubio, libExtract<br />
* MATLAB/Weka code for sound clustering with a flexible feature vector<br />
* C++ API examples Marsyas, Aubio, libExtract - pre-built examples to read and customize<br />
* Extract CAL 500 per-song features to .mat or .csv using features from today. This will be used on lab for Friday. Copy it from the folder ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware it's a 2Gb .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
* Down-loads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/ODF.zip Onset Detection Function example code in Octave/Matlab]<br />
* Notes on c-API configuration<br />
** FFTW<br />
./configure --help<br><br />
./configure --enable-float<br />
** libSndFile<br />
./configure --disable-external-libs --disable-sqlite<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
<br><u>Day 3</u> <br />
* '''Presenters: Stephen Pope & Steve Tjoa'''<br />
* [http://up.stevetjoa.com/tjoa20110629ccrma.pdf Lecture and Lab 3 Slides by Steve Tjoa]<br />
* [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day3.pdf Lecture 3 Slides]<br />
* Overview: 2nd-Stage Processing and Post-processing in MIR Applications<br />
* 2nd-Stage Processing<br />
** Thresholds and Data Pruning<br />
** Perceptual Mapping<br />
** Data Reduction: Averaging, GMMs, Running Averages<br />
** Feature-data-smoothing: de-spiking, sticky values, filter, etc.<br />
* Segmentation of music and non-musical audio<br />
** Segmentation based on islands of similar features<br />
** Segmentation based on regular difference peaks<br />
** Segmentation based on labeling<br />
* Post-processing: What are we doing?<br />
** Storing Feature Data: SQL, JSON, XML, etc.<br />
** Classification/Clustering/Transcription/Labeling<br />
* Classification: KNN vs SVM training and testing<br />
** SVM tools and APIs<br />
* Clustering vs Classification: Tree-based systems<br />
* Audio Transcription: Onsets and per-onset features<br />
* Other applications: source separation, similarity match, search, etc.<br />
* Classification/estimation in the presence of polyphony<br />
** Try basic approach on a musical mixture.<br />
** How well does it perform? <br />
** What do we do to improve its performance? ICA, NMF, K-SVD.<br />
** Matrix representations of data: spectrogram, chromagram, timbregram, etc.<br />
** Methods to improve NMF/K-SVD under heavy harmonic overlap<br />
* Applications<br />
** Feature vector pruning<br />
** Segmentation examples<br />
** SVMs for classification<br />
** Multipitch estimation, source separation, denoising<br />
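As a sketch of the NMF idea mentioned above: the classic Lee-Seung multiplicative updates, applied to a toy nonnegative "spectrogram". The data here is invented for illustration; in practice V would be an STFT magnitude matrix.<br />

```python
import numpy as np

def nmf(V, k, n_iter=200, eps=1e-9):
    """Factor a nonnegative matrix V (freq x time) into W (freq x k)
    and H (k x time) using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, k)) + eps
    H = rng.random((k, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis spectra
    return W, H

# Toy "spectrogram": two spectral templates active at different times.
W_true = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
H_true = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
V = W_true @ H_true
W, H = nmf(V, k=2)
err = np.abs(V - W @ H).max()
print(err)   # reconstruction error should be small
```

The columns of W play the role of spectral templates (e.g. one per pitch or instrument) and the rows of H are their activations over time; heavy harmonic overlap is exactly the case where these plain updates struggle, motivating the refinements listed above.<br />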
<br><u>Lab 3:</u> <br />
* 2nd-Stage Processing<br />
* SVM tools<br />
* Classification examples<br />
<p><br />
* If you finish early, see the "bonus labs" section below.<br />
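For the classification examples, the instance-based k-NN classifier from Day 1 is small enough to write directly in NumPy (SVMs are better left to the libsvm/Weka tools listed below). The 2-D feature values here are made up for illustration.<br />

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test vector by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)     # distances to all training points
        nearest = y_train[np.argsort(d)[:k]]        # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])       # majority vote
    return np.array(preds)

# Toy 2-D feature vectors (think [ZCR, spectral centroid]) for two classes.
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
                    [0.8, 0.9], [0.9, 0.8], [0.85, 0.85]])
y_train = np.array([0, 0, 0, 1, 1, 1])
X_test = np.array([[0.12, 0.18], [0.88, 0.82]])
print(knn_predict(X_train, y_train, X_test))  # -> [0 1]
```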
<br />
<br><u>Day 4:</u> <br />
* '''Presenters: George Tzanetakis'''<br />
* [http://ccrma.stanford.edu/workshops/mir2011/ccrma_2011_pitch_reps.pdf Pitch Representation Slides (.pdf)]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/ccrma_2011_pitch_reps.pptx Pitch Representation Slides (.pptx)]<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Analysis of religious cantillation (Computational Ethnomusicology) <br />
** Query-by-humming <br />
** Music Transcription <br />
* Tools <br />
** Marsyas <br />
** Python/NumPy/Matplotlib<br />
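A minimal monophonic pitch detector via autocorrelation, the simplest of the approaches above (NumPy sketch on a synthetic sine; real signals need extra care with octave errors and unvoiced frames):<br />

```python
import numpy as np

def autocorr_f0(x, sr, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a monophonic frame by
    picking the autocorrelation peak inside the allowed lag range."""
    x = x - x.mean()
    r = np.correlate(x, x, mode='full')[len(x) - 1:]  # non-negative lags
    lo = int(sr / fmax)                               # shortest allowed period
    hi = int(sr / fmin)                               # longest allowed period
    lag = lo + np.argmax(r[lo:hi])
    return sr / lag

sr = 44100
t = np.arange(int(0.05 * sr)) / sr                    # one 50 ms frame
x = np.sin(2 * np.pi * 220 * t)                       # A3 sine
print(round(autocorr_f0(x, sr), 1))                   # close to 220 Hz
```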
<br><u>Lab 4: </u><br />
* Marsyas compilation <br />
** Instructions for CCRMA Machines [http://ccrma.stanford.edu/workshops/mir2011/marsyas_ccrma2011.pdf marsyas_ccrma2011.pdf]<br />
*** SKT: If you get an error about Python.h, install the package python2.7-dev (for version 2.7).<br />
* Marsyas tour <br />
* Plotting and prototyping using the Marsyas Python bindings<br />
* Writing some C++ Marsyas code <br />
* DTW in Matlab [http://labrosa.ee.columbia.edu/matlab/dtw/ Dan Ellis DTW Matlab example] <br />
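The same algorithm can be sketched in a few lines of Python/NumPy (a minimal cost-only version; Dan Ellis's Matlab code additionally recovers the alignment path, which audio-score alignment needs):<br />

```python
import numpy as np

def dtw(a, b, dist=lambda u, v: abs(u - v)):
    """Classic dynamic-programming DTW: minimal alignment cost between
    sequences a and b (no path recovery, for brevity)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

a = [1, 2, 3, 4, 3, 2]
b = [1, 1, 2, 3, 4, 3, 2, 2]   # same contour, stretched in time
print(dtw(a, b))               # -> 0.0: DTW absorbs the tempo difference
```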
<br />
<br />
<br><u>Day 5:</u> <br />
* '''Presenters: Douglas Eck'''<br />
* [http://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day5.pdf Day 5 Slides (.pdf)]<br />
* Music Recommendation<br />
** Overview of music recommendation and what makes it hard.<br />
** Some statistics and observations about the music industry and the need for recommendation.<br />
** Point-Counterpoint: Should we bother with content-based analysis?<br />
* Autotagging<br />
** Features for autotagging (some of this will be review, given days 1 through 4.)<br />
** Demos of clustering for different types of acoustic features.<br />
** Training Data.<br />
** Classifiers (focus on AdaBoost) <br />
** Feature selection.<br />
** Evaluation with lots of examples.<br />
* Time permitting:<br />
** Advanced features <br />
** Sparse coding<br />
** Using musical structure.<br />
<br />
<br><u>Lab 5</u><br />
<br />
See [[MIR_workshop_2011_day5_lab]] for a full description. Here is a summary:<br />
<br />
* The basics (some Python code available to help).<br />
** Calculate acoustic features on CAL500 dataset (students should have already done this.) <br />
** Read in user tag annotations from same dataset provided by UCSD.<br />
** Build similarity matrix based on word vectors derived from these annotations.<br />
** Query similarity matrix with a track to get top hits based on cosine distance.<br />
** Build second similarity matrix using acoustic features. <br />
** Query this similarity matrix with track to get top hits based on cosine distance. <br />
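Both query steps above reduce to the same cosine-distance ranking. A minimal sketch, with the similarity data kept as a dictionary of vectors as in the lab code; the weight vectors (and the third track name) are invented for illustration, the first two track names are from the example output on the lab page:<br />

```python
import numpy as np

def cosine_distance(v1, v2):
    """1 - cosine similarity; 0 means identical direction."""
    return 1.0 - np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

def top_hits(vectors, query, k=3):
    """Rank all tracks by cosine distance to the query track."""
    q = vectors[query]
    scores = {name: cosine_distance(q, v) for name, v in vectors.items()}
    return sorted(scores, key=scores.get)[:k]

# Hypothetical tag-weight vectors keyed by track name.
vectors = {
    'norah_jones-dont_know_why':      np.array([1.0, 0.8, 0.0, 0.1]),
    'carole_king-youve_got_a_friend': np.array([0.9, 0.7, 0.1, 0.0]),
    'metallica-one':                  np.array([0.0, 0.1, 1.0, 0.9]),
}
print(top_hits(vectors, 'norah_jones-dont_know_why', k=2))
```

Swapping the tag-weight vectors for acoustic feature vectors gives the second similarity matrix with no change to the query code.<br />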
<br />
* Extra (I didn't write code for this, but can help students find examples).<br />
** Query the EchoNest for additional acoustic features and compare to yours. <br />
** Use the CAL500 user annotations as ground truth and evaluate your audio features (ROC curve or some precision measure). <br />
** Compare a 2D visualization of acoustic features versus UCSD user annotations.<br />
<br><br />
A file of features for the CAL500 data set (simple feature vector) is in /usr/ccrma/courses/mir2011/Cal500_Features.csv<br />
<br><br />
<br><u>Bonus Lab material</u><br />
* Insert your bonus lab materials here...<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011_day5_lab&diff=11936MIR workshop 2011 day5 lab2011-07-01T16:04:40Z<p>Deck: </p>
<hr />
<div>MIR Workshop 2011 Day 5 Lab on Music Recommendation<br><br />
Douglas Eck, Google<br />
<br />
<br />
<h2>Overview</h2><br />
This lab covers the construction of parts of a music recommender. Focus is placed on building a similarity matrix from data and querying that matrix based on cosine distance. <br />
Fast programmers should be able to accomplish considerably more. <br />
<br />
<br />
* The basics (some Python code available to help).<br />
** Calculate acoustic features on CAL500 dataset (students should have already done this.) <br />
** Read in user tag annotations for the same dataset provided by UCSD.<br />
** Build similarity matrix based on word vectors derived from these annotations. In my implementation the matrix is stored as a dictionary of vectors, but this is Python-specific.<br />
** Query similarity matrix with a track to get top hits based on cosine distance.<br />
** Build second similarity matrix using acoustic features. <br />
** Query this similarity matrix with track to get top hits based on cosine distance. <br />
<br />
* Extra (I didn't write code for this, but can help students find examples).<br />
** Query the EchoNest for additional acoustic features and compare to yours. <br />
** Use the CAL500 user annotations as ground truth and evaluate your audio features (ROC curve or some precision measure). <br />
** Compare a 2D visualization of acoustic features versus UCSD user annotations.<br />
** Train a classifier on CAL500. Train / test splits will be generated this afternoon.<br />
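The annotation-to-vector step can be sketched as follows (the weight values mirror the ANNOTATION_MAP idea in cal500_example.py below; the tag names and input format here are invented for illustration):<br />

```python
import numpy as np

# Mapping from annotation text values to numeric weights (same idea as
# ANNOTATION_MAP in cal500_example.py; tag names below are illustrative).
WEIGHTS = {'yes': 1.0, 'prominent': 1.0, 'present': 0.75,
           'uncertain': 0.5, 'no': 0.0, 'none': 0.0}

def tags_to_vector(tag_values, vocabulary):
    """Turn one track's {tag: text_value} annotations into a fixed-length
    weight vector ordered by the shared vocabulary."""
    return np.array([WEIGHTS.get(tag_values.get(tag, 'no'), 0.0)
                     for tag in vocabulary])

vocabulary = ['Genre-Jazz', 'Instrument-Piano', 'Mood-Calm']
track = {'Genre-Jazz': 'yes', 'Instrument-Piano': 'present'}
print(tags_to_vector(track, vocabulary))  # weights for jazz, piano, calm
```

Because every track's vector is ordered by the same vocabulary, cosine distances between tracks are directly comparable.<br />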
<br />
Lab code is found in /usr/ccrma/courses/mir2011/cal500_new.<br />
A previous version was uploaded at /usr/ccrma/courses/mir2011/cal500, but it used filenames from my University of Montreal lab. I renamed the audio files to match those of the UCSD CAL500 set.<br />
<br />
An example call to the provided code:<br />
<br />
<pre><br />
%python cal500_example.py norah_jones-dont_know_why<br />
1 Cosine distance=0.0000 Track=norah_jones-dont_know_why<br />
2 Cosine distance=0.1111 Track=carole_king-youve_got_a_friend<br />
3 Cosine distance=0.1365 Track=alicia_keys-fallin<br />
4 Cosine distance=0.1387 Track=dido-here_with_me<br />
5 Cosine distance=0.1610 Track=fiona_apple-love_ridden<br />
6 Cosine distance=0.1646 Track=barry_manilow-mandy<br />
7 Cosine distance=0.1661 Track=stan_getz-corcovado_quiet_nights_of_quiet_stars<br />
8 Cosine distance=0.1696 Track=aimee_mann-wise_up<br />
9 Cosine distance=0.1758 Track=marvelettes-please_mr._postman<br />
10 Cosine distance=0.1772 Track=ben_folds_five-brick<br />
11 Cosine distance=0.1795 Track=cranberries-linger<br />
12 Cosine distance=0.1833 Track=sade-smooth_operator<br />
13 Cosine distance=0.1860 Track=john_lennon-imagine<br />
14 Cosine distance=0.1864 Track=dionne_warwick-walk_on_by<br />
15 Cosine distance=0.1918 Track=5th_dimension-one_less_bell_to_answer<br />
16 Cosine distance=0.1923 Track=carpenters-rainy_days_and_mondays<br />
17 Cosine distance=0.1932 Track=diana_ross_and_the_supremes-where_did_our_love_go<br />
18 Cosine distance=0.1939 Track=smokey_robinson_and_the_miracles-ooo_baby_baby<br />
19 Cosine distance=0.1939 Track=fleetwood_mac-say_you_love_me<br />
20 Cosine distance=0.1970 Track=rufus_wainwright-cigarettes_and_chocolate_milk<br />
</pre><br />
<br />
<br />
Functions provided by me in cal500_example.py. Students can recode as they please in any language. They are free to use as much of my code as they want.<br />
<br />
<pre><br />
def RemapFilename(infile):<br />
"""Maps a filename from Doug name to real cal500 name."""<br />
<br />
<br />
def RenameCal500(dest_directory='high_bitrate'):<br />
"""Renames cal500 files from Doug naming scheme to standard one<br />
and places them in dest_directory."""<br />
<br />
<br />
def MakeFilenameMap(infile = 'cal500_doug_filenames.txt',<br />
outfile = 'cal500_filename_map.txt'):<br />
"""Create text file mapping old (Doug) filenames to standard<br />
Cal500 filenames."""<br />
<br />
<br />
def GetKeyFromAnnotationPath(annotation_path):<br />
"""Gets key from annotation file path, stripping off<br />
the number."""<br />
# Ex: norine_braun-spanish_banks_02.txt yields<br />
# norine_braun-spanish_banks<br />
<br />
<br />
# Mapping from text values in CAL500 annotation files to (somewhat arbitrary) numeric values.<br />
ANNOTATION_MAP = {<br />
'yes': 1.0,<br />
'prominent': 1.0,<br />
'present': 0.75,<br />
'uncertain': 0.5,<br />
'no': 0.0,<br />
'none': 0.0,<br />
'5': 5/5.0,<br />
'4': 4/5.0,<br />
'3': 3/5.0,<br />
'2': 2/5.0,<br />
'1': 1/5.0,<br />
'0': 0.0<br />
}<br />
<br />
<br />
def AddTagWeightsToDictFromAnnotationFile(annotation_path, tag_weights):<br />
"""Reads tag weights into a dictionary keyed by tag name<br />
and adds them to the defaultdict tag_weights. Use key 'counter'<br />
to track number of annotations."""<br />
<br />
<br />
def BuildTagDictionary(tag_directory='annotations'):<br />
"""Builds dictionary mapping cal500 key to a weighted tag vector.<br />
Returns dictionary and our vocabulary of tags."""<br />
<br />
<br />
def BuildVectorDictionaryFromTagDictionary(tag_dict, vocabulary):<br />
"""Transforms tag dictionaries into tag vectors using the vocabulary."""<br />
<br />
<br />
def CosineDistance(v1, v2):<br />
"""Calculates cosine distance using numpy."""<br />
<br />
<br />
def ScoreQuery(vector_dict, query):<br />
"""Finds nearest neighbors for query in vector_dict."""<br />
<br />
<br />
def PrintScoreDict(score_dict, query, k=20):<br />
"""Print score dictionary for a query."""<br />
<br />
<br />
# Here is the main function as called above.<br />
if __name__=='__main__':<br />
if len(sys.argv)>1:<br />
query = sys.argv[1]<br />
else:<br />
query = 'norah_jones-dont_know_why'<br />
<br />
# Build vectors from words.<br />
tag_dict, vocabulary = BuildTagDictionary()<br />
vector_dict = BuildVectorDictionaryFromTagDictionary(tag_dict, vocabulary)<br />
<br />
# Score a query.<br />
score_dict = ScoreQuery(vector_dict, query)<br />
PrintScoreDict(score_dict, query)<br />
<br />
</pre></div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011&diff=11932MIR workshop 20112011-07-01T00:45:04Z<p>Deck: </p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''"Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval"<br />
'''<br />
* 9-5 PM. Mon, 06/27/2011 - Fri, 07/01/2011<br />
* Instructors: <br />
- Jay LeBoeuf, [http://www.imagine-research.com Imagine Research ]<br />
- Rebecca Fiebrink, [http://www.cs.princeton.edu/~fiebrink/Rebecca_Fiebrink/welcome.html Princeton University]<br />
- Douglas Eck, Google Research [http://research.google.com Google]<br />
- Stephen Pope, [http://www.imagine-research.com Imagine Research ]<br />
- Steve Tjoa, University of Maryland / [http://www.imagine-research.com Imagine Research ]<br />
- Leigh Smith, [http://www.imagine-research.com Imagine Research ]<br />
- George Tzanetakis, [http://webhome.cs.uvic.ca/~gtzan/ University of Victoria]<br />
<br />
* Participants:<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Lectures & Labs ==<br />
<br><u>Day 1:</u> [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day1.pdf Lecture 1 Slides]<br />
* '''Presenters: Jay LeBoeuf & Rebecca Fiebrink'''<br />
* CCRMA Introduction - (Carr/Sasha). CCRMA Tour.<br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
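Two of the Day 1 features, ZCR and a spectral moment (the centroid), are easy to compute frame by frame. A NumPy sketch on synthetic tones; frame length, hop, and test frequencies are arbitrary choices:<br />

```python
import numpy as np

def frame_features(x, sr, frame_len=1024, hop=512):
    """Compute per-frame zero-crossing rate and spectral centroid."""
    feats = []
    for start in range(0, len(x) - frame_len, hop):
        frame = x[start:start + frame_len]
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)        # sign changes per sample
        mag = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
        freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)    # magnitude-weighted mean freq
        feats.append((zcr, centroid))
    return np.array(feats)

sr = 8000
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 100 * t)      # low tone: low ZCR, low centroid
high = np.sin(2 * np.pi * 1234 * t)    # high tone: high ZCR, high centroid
f_low, f_high = frame_features(low, sr), frame_features(high, sr)
print(f_low.mean(axis=0), f_high.mean(axis=0))
```

Stacking such per-frame values (plus their means and variances per song) gives the kind of feature vectors the k-NN classifier consumes.<br />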
<br><u>Lab 1:</u> <br><br />
* [https://ccrma.stanford.edu/workshops/mir2011/Lab_1_2011.pdf Lab 1 - Basic Feature Extraction and Classification] <br><br />
* [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
* [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
<br><u>Day 2:</u> [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day2.pdf Lecture 2 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
* '''Presenters: Leigh Smith & Stephen Pope'''<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S encoding, surround-sound processing, frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
* Applications<br />
** Song clustering based on a variety of feature vectors<br />
** PCA of feature spaces using Weka<br />
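One common onset detection function from the list above is half-wave-rectified spectral flux (a spectral-domain difference). A NumPy sketch on a synthetic test signal; frame sizes and the note envelope are illustrative:<br />

```python
import numpy as np

def spectral_flux_odf(x, frame_len=1024, hop=512):
    """Onset detection function: half-wave-rectified frame-to-frame
    increase in spectral magnitude (spectral flux)."""
    frames = [x[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(x) - frame_len, hop)]
    mags = np.array([np.abs(np.fft.rfft(f)) for f in frames])
    diff = np.diff(mags, axis=0)
    return np.sum(np.maximum(diff, 0.0), axis=1)   # only energy increases count

sr = 8000
x = np.zeros(sr)
for onset in (0.25, 0.5, 0.75):                    # three note onsets (seconds)
    n = int(onset * sr)
    t = np.arange(sr // 8) / sr
    x[n:n + len(t)] += np.sin(2 * np.pi * 440 * t) * np.exp(-8 * t)
odf = spectral_flux_odf(x)
peaks = np.where(odf > 0.5 * odf.max())[0]         # crude peak picking
print(peaks * 512 / sr)                            # approximate onset times (s)
```

Peak-picking the ODF with an adaptive threshold, then looking at inter-onset intervals, is the usual route into the beat-tracking and tempo-estimation methods above.<br />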
<br><u>Lab 2:</u> <br />
* Feature extraction and flexible feature vectors in MATLAB, Marsyas, Aubio, libExtract<br />
* MATLAB/Weka code for sound clustering with a flexible feature vector<br />
* C++ API examples Marsyas, Aubio, libExtract - pre-built examples to read and customize<br />
* Extract CAL500 per-song features to .mat or .csv using today's features. These will be used in Friday's lab. Copy them from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/ODF.zip Onset Detection Function example code in Octave/Matlab]<br />
* Notes on c-API configuration<br />
** FFTW<br />
./configure --help<br><br />
./configure --enable-float<br />
** libSndFile<br />
./configure --disable-external-libs --disable-sqlite<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
<br><u>Day 3</u> <br />
* '''Presenters: Stephen Pope & Steve Tjoa'''<br />
* [http://up.stevetjoa.com/tjoa20110629ccrma.pdf Lecture and Lab 3 Slides by Steve Tjoa]<br />
* [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day3.pdf Lecture 3 Slides]<br />
* Overview: 2nd-Stage Processing and Post-processing in MIR Applications<br />
* 2nd-Stage Processing<br />
** Thresholds and Data Pruning<br />
** Perceptual Mapping<br />
** Data Reduction: Averaging, GMMs, Running Averages<br />
** Feature-data-smoothing: de-spiking, sticky values, filter, etc.<br />
* Segmentation of music and non-musical audio<br />
** Segmentation based on islands of similar features<br />
** Segmentation based on regular difference peaks<br />
** Segmentation based on labeling<br />
* Post-processing: What are we doing?<br />
** Storing Feature Data: SQL, JSON, XML, etc.<br />
** Classification/Clustering/Transcription/Labeling<br />
* Classification: KNN vs SVM training and testing<br />
** SVM tools and APIs<br />
* Clustering vs Classification: Tree-based systems<br />
* Audio Transcription: Onsets and per-onset features<br />
* Other applications: source separation, similarity match, search, etc.<br />
* Classification/estimation in the presence of polyphony<br />
** Try basic approach on a musical mixture.<br />
** How well does it perform? <br />
** What do we do to improve its performance? ICA, NMF, K-SVD.<br />
** Matrix representations of data: spectrogram, chromagram, timbregram, etc.<br />
** Methods to improve NMF/K-SVD under heavy harmonic overlap<br />
* Applications<br />
** Feature vector pruning<br />
** Segmentation examples<br />
** SVMs for classification<br />
** Multipitch estimation, source separation, denoising<br />
<br><u>Lab 3:</u> <br />
* 2nd-Stage Processing<br />
* SVM tools<br />
* Classification examples<br />
<p><br />
* If you finish early, see the "bonus labs" section below.<br />
<br />
<br><u>Day 4:</u> <br />
* '''Presenters: George Tzanetakis'''<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Analysis of religious cantillation (Computational Ethnomusicology) <br />
** Query-by-humming <br />
** Music Transcription <br />
* Tools <br />
** Marsyas <br />
** Python/NumPy/Matplotlib<br />
<br><u>Lab 4: </u><br />
* Marsyas compilation <br />
** Instructions for CCRMA Machines [http://ccrma.stanford.edu/workshops/mir2011/marsyas_ccrma2011.pdf marsyas_ccrma2011.pdf]<br />
*** SKT: If you get an error about Python.h, install the package python2.7-dev (for version 2.7).<br />
* Marsyas tour <br />
* Plotting and prototyping using the Marsyas Python bindings<br />
* Writing some C++ Marsyas code <br />
* DTW in Matlab [http://labrosa.ee.columbia.edu/matlab/dtw/ Dan Ellis DTW Matlab example] <br />
<br />
<br />
<br><u>Day 5:</u> <br />
* '''Presenters: Douglas Eck'''<br />
* Music Recommendation<br />
** Overview of music recommendation and what makes it hard.<br />
** Some statistics and observations about the music industry and the need for recommendation.<br />
** Point-Counterpoint: Should we bother with content-based analysis?<br />
* Autotagging<br />
** Features for autotagging (some of this will be review, given days 1 through 4.)<br />
** Demos of clustering for different types of acoustic features.<br />
** Training Data.<br />
** Classifiers (focus on AdaBoost) <br />
** Feature selection.<br />
** Evaluation with lots of examples.<br />
* Time permitting:<br />
** Advanced features <br />
** Sparse coding<br />
** Using musical structure.<br />
<br />
<br><u>Lab 5</u><br />
<br />
See [[MIR_workshop_2011_day5_lab]] for a full description. Here is a summary:<br />
<br />
* The basics (some Python code available to help).<br />
** Calculate acoustic features on CAL500 dataset (students should have already done this.) <br />
** Read in user tag annotations from same dataset provided by UCSD.<br />
** Build similarity matrix based on word vectors derived from these annotations.<br />
** Query similarity matrix with a track to get top hits based on cosine distance.<br />
** Build second similarity matrix using acoustic features. <br />
** Query this similarity matrix with track to get top hits based on cosine distance. <br />
<br />
* Extra (I didn't write code for this, but can help students find examples).<br />
** Query the EchoNest for additional acoustic features and compare to yours. <br />
** Use the CAL500 user annotations as ground truth and evaluate your audio features (ROC curve or some precision measure). <br />
** Compare a 2D visualization of acoustic features versus UCSD user annotations.<br />
<br />
<br><u>Bonus Lab material</u><br />
* Insert your bonus lab materials here...<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* See also the additional references below<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures ==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures == <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, by Ian H. Witten and Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (Editors)<br />
* Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold & Nelson Morgan, Wiley, 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html Univ. of Iowa Music Instrument Samples ]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011&diff=11931MIR workshop 20112011-07-01T00:36:00Z<p>Deck: </p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''"Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval"<br />
'''<br />
* 9-5 PM. Mon, 06/27/2011 - Fri, 07/01/2011<br />
* Instructors: <br />
- Jay LeBoeuf, [http://www.imagine-research.com Imagine Research ]<br />
- Rebecca Fiebrink, [http://www.cs.princeton.edu/~fiebrink/Rebecca_Fiebrink/welcome.html Princeton University]<br />
- Douglas Eck, Google Research [http://research.google.com Google]<br />
- Stephen Pope, [http://www.imagine-research.com Imagine Research ]<br />
- Steve Tjoa, University of Maryland / [http://www.imagine-research.com Imagine Research ]<br />
- Leigh Smith, [http://www.imagine-research.com Imagine Research ]<br />
- George Tzanetakis, [http://webhome.cs.uvic.ca/~gtzan/ University of Victoria]<br />
<br />
* Participants:<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by fusing basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Lectures & Labs ==<br />
<br><u>Day 1:</u> [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day1.pdf Lecture 1 Slides]<br />
* '''Presenters: Jay LeBoeuf & Rebecca Fiebrink'''<br />
* CCRMA Introduction - (Carr/Sasha). CCRMA Tour.<br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
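A toy end-to-end version of this Day 1 pipeline — a zero-crossing-rate feature, a k-NN classifier, and a prediction — might look like the sketch below. The synthetic "low"/"high" signals and all function names are invented for illustration; real labs use recorded instrument audio:<br />

```python
import numpy as np

def zcr(frame):
    """Zero-crossing rate: fraction of adjacent samples that change sign."""
    return np.mean(np.abs(np.diff(np.sign(frame))) > 0)

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote of its k nearest training examples."""
    dists = np.abs(train_X - x)          # 1-D feature, so plain absolute distance
    nearest = np.argsort(dists)[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)

t = np.linspace(0, 1, 8000, endpoint=False)
# Synthetic "instruments": low sines (bass-like) vs noisy high sines (cymbal-like).
rng = np.random.default_rng(0)
lows  = [np.sin(2 * np.pi * f * t) for f in (60, 80, 100)]
highs = [np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)
         for f in (2000, 2500, 3000)]
train_X = np.array([zcr(s) for s in lows + highs])
train_y = ['low'] * 3 + ['high'] * 3

test_sig = np.sin(2 * np.pi * 90 * t)
print(knn_predict(train_X, train_y, zcr(test_sig)))   # prints 'low'
```

ZCR alone separates these toy classes cleanly; the lecture's point is that once classes overlap in feature space, hand-tuned thresholds stop working and learned classifiers like k-NN earn their keep.<br />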
<br><u>Lab 1:</u> <br><br />
* [https://ccrma.stanford.edu/workshops/mir2011/Lab_1_2011.pdf Lab 1 - Basic Feature Extraction and Classification] <br><br />
* [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
* [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
<br><u>Day 2:</u> [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day2.pdf Lecture 2 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
* '''Presenters: Leigh Smith & Stephen Pope'''<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
* Applications<br />
** Song clustering based on a variety of feature vectors<br />
** PCA of feature spaces using Weka<br />
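One common recipe from the list above — an onset detection function from half-wave-rectified spectral flux, then tempo from the autocorrelation of that function — can be sketched as follows. This is a bare-bones illustration on a synthetic click track, not the ODF.zip reference code:<br />

```python
import numpy as np

def spectral_flux_odf(x, frame=1024, hop=512):
    """Onset detection function: half-wave-rectified frame-to-frame
    increase in the magnitude spectrum, summed over bins."""
    n = (len(x) - frame) // hop
    win = np.hanning(frame)
    mags = [np.abs(np.fft.rfft(win * x[i*hop:i*hop+frame])) for i in range(n)]
    flux = [np.sum(np.maximum(0.0, mags[i] - mags[i-1])) for i in range(1, n)]
    return np.array(flux)

def tempo_bpm(odf, sr=44100, hop=512, lo=60, hi=180):
    """Pick the autocorrelation-lag peak inside a plausible BPM range."""
    odf = odf - odf.mean()
    ac = np.correlate(odf, odf, mode='full')[len(odf)-1:]   # lags 0..N-1
    lag_lo = int(round(60.0 * sr / (hi * hop)))   # fast tempo -> short lag
    lag_hi = int(round(60.0 * sr / (lo * hop)))   # slow tempo -> long lag
    best = lag_lo + np.argmax(ac[lag_lo:lag_hi])
    return 60.0 * sr / (best * hop)

# Synthetic click track: a click every 0.5 s, i.e. 120 BPM.
sr = 44100
x = np.zeros(sr * 4)
for beat in np.arange(0, 4.0, 0.5):
    i = int(beat * sr)
    x[i:i+64] += 1.0
bpm = tempo_bpm(spectral_flux_odf(x), sr=sr)
print(round(bpm))
```

Real music needs more care than this (rubato, tatum/tactus ambiguity, octave errors between 60 and 120 BPM), which is why the lecture covers beat histograms and multi-resolution approaches as well.<br />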
<br><u>Lab 2:</u> <br />
* Feature extraction and flexible feature vectors in MATLAB, Marsyas, Aubio, libExtract<br />
* MATLAB/Weka code for sound clustering with a flexible feature vector<br />
* C++ API examples Marsyas, Aubio, libExtract - pre-built examples to read and customize<br />
* Extract CAL500 per-song features to .mat or .csv using the features from today; these will be used in Friday's lab. Copy the data from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (16 GB).<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/ODF.zip Onset Detection Function example code in Octave/Matlab]<br />
* Notes on C API configuration<br />
** FFTW<br />
./configure --help<br><br />
./configure --enable-float<br />
** libSndFile<br />
./configure --disable-external-libs --disable-sqlite<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
<br><u>Day 3</u> <br />
* '''Presenters: Stephen Pope & Steve Tjoa'''<br />
* [http://up.stevetjoa.com/tjoa20110629ccrma.pdf Lecture and Lab 3 Slides by Steve Tjoa]<br />
* [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day3.pdf Lecture 3 Slides]<br />
* Overview: 2nd-Stage Processing and Post-processing in MIR Applications<br />
* 2nd-Stage Processing<br />
** Thresholds and Data Pruning<br />
** Perceptual Mapping<br />
** Data Reduction: Averaging, GMMs, Running Averages<br />
** Feature-data-smoothing: de-spiking, sticky values, filter, etc.<br />
* Segmentation of music and non-musical audio<br />
** Segmentation based on islands of similar features<br />
** Segmentation based on regular difference peaks<br />
** Segmentation based on labeling<br />
* Post-processing: What are we doing?<br />
** Storing Feature Data: SQL, JSON, XML, etc.<br />
** Classification/Clustering/Transcription/Labeling<br />
* Classification: KNN vs SVM training and testing<br />
** SVM tools and APIs<br />
* Clustering vs Classification: Tree-based systems<br />
* Audio Transcription: Onsets and per-onset features<br />
* Other applications: source separation, similarity match, search, etc.<br />
* Classification/estimation in the presence of polyphony<br />
** Try basic approach on a musical mixture.<br />
** How well does it perform? <br />
** What do we do to improve its performance? ICA, NMF, K-SVD.<br />
** Matrix representations of data: spectrogram, chromagram, timbregram, etc.<br />
** Methods to improve NMF/K-SVD under heavy harmonic overlap<br />
* Applications<br />
** Feature vector pruning<br />
** Segmentation examples<br />
** SVMs for classification<br />
** Multipitch estimation, source separation, denoising<br />
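As a concrete instance of the feature-data smoothing listed above, a short median filter removes isolated spikes from a feature track and a running average smooths residual jitter. A generic NumPy sketch (function names invented here; not tied to any particular toolbox):<br />

```python
import numpy as np

def despike(track, width=3):
    """Median-filter a 1-D feature track to suppress isolated spikes."""
    pad = width // 2
    padded = np.pad(track, pad, mode='edge')
    return np.array([np.median(padded[i:i + width]) for i in range(len(track))])

def running_average(track, width=5):
    """Simple moving average, same length as the input (edge-padded)."""
    pad = width // 2
    padded = np.pad(track, pad, mode='edge')
    kernel = np.ones(width) / width
    return np.convolve(padded, kernel, mode='valid')

# A slowly varying feature with one bogus spike at index 10.
track = np.linspace(0.0, 1.0, 21)
track[10] = 25.0
clean = despike(track)
print(abs(clean[10] - 0.5) < 0.1)   # spike suppressed by the neighborhood median
```

The order matters: median filtering first kills outliers without blurring them into neighbors, and only then does averaging smooth what remains.<br />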
<br><u>Lab 3:</u> <br />
* 2nd-Stage Processing<br />
* SVM tools<br />
* Classification examples<br />
* If you finish early, see the "bonus labs" section below.<br />
<br />
<br><u>Day 4:</u> <br />
* '''Presenter: George Tzanetakis'''<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Analysis of religious cantillation (Computational Ethnomusicology) <br />
** Query-by-humming <br />
** Music Transcription <br />
* Tools <br />
** Marsyas <br />
** Python/NumPy/Matplotlib<br />
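The pitch-class representations listed above can be illustrated by folding FFT bin magnitudes onto the 12 pitch classes — a deliberately simplified chroma that ignores tuning estimation and harmonic weighting (an illustrative sketch only):<br />

```python
import numpy as np

def chroma(x, sr):
    """Fold magnitude-spectrum energy onto 12 pitch classes (C=0 ... B=11)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    pcp = np.zeros(12)
    for f, m in zip(freqs, spec):
        if f < 27.5 or f > 5000:          # skip sub-audio and very high bins
            continue
        midi = 69 + 12 * np.log2(f / 440.0)   # map frequency to MIDI pitch
        pcp[int(round(midi)) % 12] += m       # fold octaves onto one class
    return pcp / (pcp.sum() + 1e-12)

sr = 22050
t = np.arange(sr) / sr
a440 = np.sin(2 * np.pi * 440 * t)       # concert A
print(np.argmax(chroma(a440, sr)))       # prints 9 (pitch class A)
```

Stacking these 12-element vectors over time gives a chromagram, the usual input to the chord and key detection methods covered in this lecture.<br />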
<br><u>Lab 4: </u><br />
* Marsyas compilation <br />
** Instructions for CCRMA Machines [http://ccrma.stanford.edu/workshops/mir2011/marsyas_ccrma2011.pdf marsyas_ccrma2011.pdf]<br />
*** SKT: If you get an error about Python.h, install the package python2.7-dev (for version 2.7).<br />
* Marsyas tour <br />
* Plotting and prototyping using the Marsyas Python bindings<br />
* Writing some C++ Marsyas code <br />
* DTW in Matlab [http://labrosa.ee.columbia.edu/matlab/dtw/ Dan Ellis DTW Matlab example] <br />
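The dynamic programming behind Dan Ellis's DTW example is compact enough to restate in Python; this minimal sketch returns only the cumulative alignment cost between two 1-D sequences, with no path backtracking (an illustration, not a port of the Matlab code):<br />

```python
import numpy as np

def dtw_cost(a, b):
    """Cumulative DTW cost between 1-D sequences a and b, using an
    absolute-difference local cost and match/insert/delete steps."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # skip a frame of a
                                 D[i, j - 1],      # skip a frame of b
                                 D[i - 1, j - 1])  # match
    return D[n, m]

print(dtw_cost([1, 2, 3], [1, 2, 2, 3]))   # prints 0.0: the repeated 2 aligns for free
```

For audio-score alignment the scalars become feature vectors (e.g. chroma frames) and the local cost becomes a vector distance, but the recurrence is identical.<br />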
<br />
<br />
<br><u>Day 5:</u> <br />
* '''Presenter: Douglas Eck'''<br />
* Application: Recommender<br />
* Autotagging using CAL500.7<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011_day5_lab&diff=11930MIR workshop 2011 day5 lab2011-07-01T00:33:12Z<p>Deck: </p>
<hr />
<div>MIR Workshop 2011 Day 5 Lab on Music Recommendation<br><br />
Douglas Eck, Google<br />
<br />
<br />
<h2>Overview</h2><br />
This lab covers the construction of parts of a music recommender. Focus is placed on building a similarity matrix from data and querying that matrix based on cosine distance. <br />
Fast programmers should be able to accomplish considerably more. <br />
<br />
<br />
* The basics (some Python code available to help).<br />
** Calculate acoustic features on the CAL500 dataset (students should have already done this). <br />
** Read in user tag annotations for the same dataset provided by UCSD.<br />
** Build similarity matrix based on word vectors derived from these annotations. In my implementation the matrix is stored as a dictionary of vectors, but this is Python-specific.<br />
** Query similarity matrix with a track to get top hits based on cosine distance.<br />
** Build second similarity matrix using acoustic features. <br />
** Query this similarity matrix with track to get top hits based on cosine distance. <br />
<br />
* Extra (I didn't write code for this, but can help students find examples).<br />
** Query the EchoNest for additional acoustic features and compare to yours. <br />
** Use the CAL500 user annotations as ground truth and evaluate your audio features (ROC curve or some precision measure). <br />
** Compare a 2D visualization of acoustic features versus UCSD user annotations.<br />
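For the ROC-style evaluation suggested above, a rank-based AUC for a single tag needs no extra libraries: it is the probability that a randomly chosen positive track scores above a randomly chosen negative one. A sketch with made-up scores (not the lab's data):<br />

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula:
    the fraction of positive/negative pairs ranked correctly, ties count half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Toy example: similarity scores for one tag, ground truth from annotations.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))
```

An AUC of 0.5 means the audio features are no better than chance at predicting the annotation; computing this per tag shows which CAL500 concepts the features actually capture.<br />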
<br />
<br />
Lab code is found in /usr/ccrma/courses/mir2011/cal500_new.<br />
A previous version was uploaded at /usr/ccrma/courses/mir2011/cal500, but it used filenames from my University of Montreal lab; I have renamed the audio files to match those of the UCSD CAL500 release.<br />
<br />
An example call for the code provided by me:<br />
<br />
<pre><br />
%python cal500_example.py norah_jones-dont_know_why<br />
1 Cosine distance=0.0000 Track=norah_jones-dont_know_why<br />
2 Cosine distance=0.1111 Track=carole_king-youve_got_a_friend<br />
3 Cosine distance=0.1365 Track=alicia_keys-fallin<br />
4 Cosine distance=0.1387 Track=dido-here_with_me<br />
5 Cosine distance=0.1610 Track=fiona_apple-love_ridden<br />
6 Cosine distance=0.1646 Track=barry_manilow-mandy<br />
7 Cosine distance=0.1661 Track=stan_getz-corcovado_quiet_nights_of_quiet_stars<br />
8 Cosine distance=0.1696 Track=aimee_mann-wise_up<br />
9 Cosine distance=0.1758 Track=marvelettes-please_mr._postman<br />
10 Cosine distance=0.1772 Track=ben_folds_five-brick<br />
11 Cosine distance=0.1795 Track=cranberries-linger<br />
12 Cosine distance=0.1833 Track=sade-smooth_operator<br />
13 Cosine distance=0.1860 Track=john_lennon-imagine<br />
14 Cosine distance=0.1864 Track=dionne_warwick-walk_on_by<br />
15 Cosine distance=0.1918 Track=5th_dimension-one_less_bell_to_answer<br />
16 Cosine distance=0.1923 Track=carpenters-rainy_days_and_mondays<br />
17 Cosine distance=0.1932 Track=diana_ross_and_the_supremes-where_did_our_love_go<br />
18 Cosine distance=0.1939 Track=smokey_robinson_and_the_miracles-ooo_baby_baby<br />
19 Cosine distance=0.1939 Track=fleetwood_mac-say_you_love_me<br />
20 Cosine distance=0.1970 Track=rufus_wainwright-cigarettes_and_chocolate_milk<br />
</pre><br />
<br />
<br />
The functions provided in cal500_example.py are listed below. Students can recode them in any language, and are free to use as much of my code as they want.<br />
<br />
<pre><br />
def RemapFilename(infile):<br />
"""Maps a filename from Doug name to real cal500 name."""<br />
<br />
<br />
def RenameCal500(dest_directory='high_bitrate'):<br />
"""Renames cal500 files from Doug naming scheme to standard one<br />
and places them in dest_directory."""<br />
<br />
<br />
def MakeFilenameMap(infile = 'cal500_doug_filenames.txt',<br />
outfile = 'cal500_filename_map.txt'):<br />
"""Create text file mapping old (Doug) filenames to standard<br />
Cal500 filenames."""<br />
<br />
<br />
def GetKeyFromAnnotationPath(annotation_path):<br />
"""Gets key from annotation file path, stripping off<br />
the number.<br />
Ex: norine_braun-spanish_banks_02.txt yields<br />
norine_braun-spanish_banks."""<br />
<br />
<br />
# Mapping from text values in CAL500 annotation files to (somewhat arbitrary) numeric values.<br />
ANNOTATION_MAP = {<br />
'yes': 1.0,<br />
'prominent': 1.0,<br />
'present': 0.75,<br />
'uncertain': 0.5,<br />
'no': 0.0,<br />
'none': 0.0,<br />
'5': 5/5.0,<br />
'4': 4/5.0,<br />
'3': 3/5.0,<br />
'2': 2/5.0,<br />
'1': 1/5.0,<br />
'0': 0.0<br />
}<br />
<br />
<br />
def AddTagWeightsToDictFromAnnotationFile(annotation_path, tag_weights):<br />
"""Reads tag weights into a dictionary keyed by tag name<br />
and adds them to the defaultdict tag_weights. Use key 'counter'<br />
to track number of annotations."""<br />
<br />
<br />
def BuildTagDictionary(tag_directory='annotations'):<br />
"""Builds dictionary mapping cal500 key to a weighted tag vector.<br />
Returns dictionary and our vocabulary of tags."""<br />
<br />
<br />
def BuildVectorDictionaryFromTagDictionary(tag_dict, vocabulary):<br />
"""Transforms tag dictionaries into tag vectors using the vocabulary."""<br />
<br />
<br />
def CosineDistance(v1, v2):<br />
"""Calculates cosine distance using numpy."""<br />
<br />
<br />
def ScoreQuery(vector_dict, query):<br />
"""Finds nearest neighbors for query in vector_dict."""<br />
<br />
<br />
def PrintScoreDict(score_dict, query, k=20):<br />
"""Print score dictionary for a query."""<br />
<br />
<br />
# Here is the main function as called above.<br />
if __name__=='__main__':<br />
if len(sys.argv)>1:<br />
query = sys.argv[1]<br />
else:<br />
query = 'norah_jones-dont_know_why'<br />
<br />
# Build vectors from words.<br />
tag_dict, vocabulary = BuildTagDictionary()<br />
vector_dict = BuildVectorDictionaryFromTagDictionary(tag_dict, vocabulary)<br />
<br />
# Score a query.<br />
score_dict = ScoreQuery(vector_dict, query)<br />
PrintScoreDict(score_dict, query)<br />
<br />
</pre></div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011_day5_lab&diff=11929MIR workshop 2011 day5 lab2011-07-01T00:31:55Z<p>Deck: </p>
<hr />
<div>MIR Workshop 2011 Day 5 Lab on Music Recommendation<br><br />
Douglas Eck, Google<br />
<br />
<br />
<h2>Overview</h2><br />
This lab covers the construction of parts of a music recommender. Focus is placed on building a similarity matrix from data and querying that matrix based on cosine distance. <br />
Fast programmers should be able to accomplish considerably more. <br />
<br />
<br />
* The basics (some Python code available to help).<br />
** Calculate acoustic features on CAL500 dataset (students should have already done this.) <br />
** Read in user tag annotations for the same dataset provided by UCSD.<br />
** Build similarity matrix based on word vectors derived from these annotations.<br />
** Query similarity matrix with a track to get top hits based on cosine distance.<br />
** Build second similarity matrix using acoustic features. <br />
** Query this similarity matrix with track to get top hits based on cosine distance. <br />
<br />
* Extra (I didn't write code for this, but can help students find examples).<br />
** Query the EchoNest for additional acoustic features and compare to yours. <br />
** Use the CAL500 user annotations as ground truth and evaluate your audio features (ROC curve or some precision measure). <br />
** Compare a 2D visualization of acoustic features versus UCSD user annotations.<br />
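To make the querying steps above concrete, here is a minimal sketch of a cosine-distance query over tag vectors. The function and track names here are illustrative toy stand-ins, not the stubs distributed in cal500_example.py:<br />

```python
import numpy as np

def cosine_distance(v1, v2):
    """Cosine distance: 1 minus the cosine of the angle between v1 and v2."""
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    if denom == 0.0:
        return 1.0
    return 1.0 - float(np.dot(v1, v2) / denom)

def score_query(vector_dict, query):
    """Rank every track by cosine distance to the query track."""
    q = vector_dict[query]
    scores = {track: cosine_distance(q, v) for track, v in vector_dict.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Toy tag vectors over a hypothetical 3-word vocabulary.
vector_dict = {
    'track_a': np.array([1.0, 1.0, 0.0]),
    'track_b': np.array([0.9, 0.8, 0.1]),
    'track_c': np.array([0.0, 0.1, 1.0]),
}
for rank, (track, dist) in enumerate(score_query(vector_dict, 'track_a'), 1):
    print('%d Cosine distance=%.4f Track=%s' % (rank, dist, track))
```

The query track itself always comes back first with distance 0, exactly as in the norah_jones example output below.<br />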
<br />
<br />
Lab code is found in /usr/ccrma/courses/mir20110/cal500_new.<br />
A previous version was uploaded at /usr/ccrma/courses/mir2011/cal500, but it used filenames from my University of Montreal lab. I renamed the audio files to match those of the UCSD CAL500 release.<br />
<br />
An example call for the code provided by me:<br />
<br />
<pre><br />
%python cal500_example.py norah_jones-dont_know_why<br />
1 Cosine distance=0.0000 Track=norah_jones-dont_know_why<br />
2 Cosine distance=0.1111 Track=carole_king-youve_got_a_friend<br />
3 Cosine distance=0.1365 Track=alicia_keys-fallin<br />
4 Cosine distance=0.1387 Track=dido-here_with_me<br />
5 Cosine distance=0.1610 Track=fiona_apple-love_ridden<br />
6 Cosine distance=0.1646 Track=barry_manilow-mandy<br />
7 Cosine distance=0.1661 Track=stan_getz-corcovado_quiet_nights_of_quiet_stars<br />
8 Cosine distance=0.1696 Track=aimee_mann-wise_up<br />
9 Cosine distance=0.1758 Track=marvelettes-please_mr._postman<br />
10 Cosine distance=0.1772 Track=ben_folds_five-brick<br />
11 Cosine distance=0.1795 Track=cranberries-linger<br />
12 Cosine distance=0.1833 Track=sade-smooth_operator<br />
13 Cosine distance=0.1860 Track=john_lennon-imagine<br />
14 Cosine distance=0.1864 Track=dionne_warwick-walk_on_by<br />
15 Cosine distance=0.1918 Track=5th_dimension-one_less_bell_to_answer<br />
16 Cosine distance=0.1923 Track=carpenters-rainy_days_and_mondays<br />
17 Cosine distance=0.1932 Track=diana_ross_and_the_supremes-where_did_our_love_go<br />
18 Cosine distance=0.1939 Track=smokey_robinson_and_the_miracles-ooo_baby_baby<br />
19 Cosine distance=0.1939 Track=fleetwood_mac-say_you_love_me<br />
20 Cosine distance=0.1970 Track=rufus_wainwright-cigarettes_and_chocolate_milk<br />
</pre><br />
<br />
<br />
Functions provided by me in cal500_example.py. Students can recode as they please in any language. They are free to use as much of my code as they want.<br />
<br />
<pre><br />
def RemapFilename(infile):<br />
"""Maps a filename from Doug name to real cal500 name."""<br />
<br />
<br />
def RenameCal500(dest_directory='high_bitrate'):<br />
"""Renames cal500 files from Doug naming scheme to standard one<br />
and places them in dest_directory."""<br />
<br />
<br />
def MakeFilenameMap(infile = 'cal500_doug_filenames.txt',<br />
outfile = 'cal500_filename_map.txt'):<br />
"""Create text file mapping old (Doug) filenames to standard<br />
Cal500 filenames."""<br />
<br />
<br />
def GetKeyFromAnnotationPath(annotation_path):<br />
"""Gets key from annnotation file path, stripping off<br />
the number.<br />
# Ex: norine_braun-spanish_banks_02.txt yields<br />
# norine_braun-spanish_banks<br />
<br />
<br />
# Mapping from text values in CAL500 annotation files to (somewhat arbitrary) numeric values.<br />
ANNOTATION_MAP = {<br />
'yes': 1.0,<br />
'prominent': 1.0,<br />
'present': 0.75,<br />
'uncertain': 0.5,<br />
'no': 0.0,<br />
'none': 0.0,<br />
'5': 5/5.0,<br />
'4': 4/5.0,<br />
'3': 3/5.0,<br />
'2': 2/5.0,<br />
'1': 1/5.0,<br />
'0': 0.0<br />
}<br />
<br />
<br />
def AddTagWeightsToDictFromAnnotationFile(annotation_path, tag_weights):<br />
"""Reads tag weights into a dictionary keyed by tag name<br />
and adds them to the defaultdict tag_weights. Use key 'counter'<br />
to track number of annotations."""<br />
<br />
<br />
def BuildTagDictionary(tag_directory='annotations'):<br />
"""Builds dictionary mapping cal500 key to a weighted tag vector.<br />
Returns dictionary and our vocabulary of tags."""<br />
<br />
<br />
def BuildVectorDictionaryFromTagDictionary(tag_dict, vocabulary):<br />
"""Transforms tag dictionaries int tag vectors using vocabulary."""<br />
<br />
<br />
def CosineDistance(v1, v2):<br />
"""Calculates cosine distance using numpy."""<br />
<br />
<br />
def ScoreQuery(vector_dict, query):<br />
"""Finds nearest neigbors for query in vector_dict."""<br />
<br />
<br />
def PrintScoreDict(score_dict, query, k=20):<br />
"""Print score dictionary for a query."""<br />
<br />
<br />
# Here is the main function as called above.<br />
if __name__=='__main__':<br />
if len(sys.argv)>1:<br />
query = sys.argv[1]<br />
else:<br />
query = 'norah_jones-dont_know_why'<br />
<br />
# Build vectors from words.<br />
tag_dict, vocabulary = BuildTagDictionary()<br />
vector_dict = BuildVectorDictionaryFromTagDictionary(tag_dict, vocabulary)<br />
<br />
# Score a query.<br />
score_dict = ScoreQuery(vector_dict, query)<br />
PrintScoreDict(score_dict, query)<br />
<br />
</pre></div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011_day5_lab&diff=11928MIR workshop 2011 day5 lab2011-07-01T00:23:28Z<p>Deck: </p>
<hr />
<div>MIR Workshop 2011 Day 5 Lab on Music Recommendation<br><br />
Douglas Eck, Google<br />
<br />
<br />
<h2>Overview</h2><br />
This lab covers the construction of parts of a music recommender. Focus is placed on building a similarity matrix from data and querying that matrix based on cosine distance. <br />
Fast programmers should be able to accomplish considerably more. <br />
<br />
<br />
* The basics (some Python code available to help).<br />
** Calculate acoustic features on CAL500 dataset (students should have already done this.) <br />
** Read in user tag annotations for the same dataset provided by UCSD.<br />
** Build similarity matrix based on word vectors derived from these annotations.<br />
** Query similarity matrix with a track to get top hits based on cosine distance.<br />
** Build second similarity matrix using acoustic features. <br />
** Query this similarity matrix with track to get top hits based on cosine distance. <br />
<br />
* Extra (I didn't write code for this, but can help students find examples).<br />
** Query the EchoNest for additional acoustic features and compare to yours. <br />
** Use the CAL500 user annotations as ground truth and evaluate your audio features (ROC curve or some precision measure). <br />
** Compare a 2D visualization of acoustic features versus UCSD user annotations.<br />
<br />
<br />
<br />
Lab code is found in /usr/ccrma/courses/mir20110/cal500_new.<br />
A previous version was uploaded at /usr/ccrma/courses/mir2011/cal500 but it was using filenames from my University of Montreal lab. I renamed audio files to match those of UCSD Cal500</div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011_day5_lab&diff=11927MIR workshop 2011 day5 lab2011-07-01T00:21:57Z<p>Deck: </p>
<hr />
<div><b>MIR Workshop Lab for Music Recommendation</b><br />
<b>Douglas Eck, Google</b><br />
<br />
<br />
<h2>Overview</h2><br />
This lab covers the construction of parts of a music recommender. Focus is placed on building a similarity matrix from data and querying that matrix based on cosine distance. <br />
Fast programmers should be able to accomplish considerably more. <br />
<br />
<br />
* The basics (some Python code available to help).<br />
** Calculate acoustic features on CAL500 dataset (students should have already done this.) <br />
** Read in user tag annotations from same dataset provided by UCSD.<br />
** Build similarity matrix based on word vectors derived from these annotations.<br />
** Query similarity matrix with a track to get top hits based on cosine distance.<br />
** Build second similarity matrix using acoustic features. <br />
** Query this similarity matrix with track to get top hits based on cosine distance. <br />
<br />
* Extra (I didn't write code for this, but can help students find examples).<br />
** Query the EchoNest for additional acoustic features and compare to yours. <br />
** Use the CAL500 user annotations as ground truth and evaluate your audio features (ROC curve or some precision measure). <br />
** Compare a 2D visualization of acoustic features versus UCSD user annotations.<br />
<br />
<br />
<br />
Lab code is found in /usr/ccrma/courses/mir20110/cal500_new.<br />
A previous version was uploaded at /usr/ccrma/courses/mir2011/cal500 but it was using filenames from my University of Montreal lab. I renamed audio files to match those of UCSD Cal500</div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011_day5_lab&diff=11926MIR workshop 2011 day5 lab2011-07-01T00:19:49Z<p>Deck: </p>
<hr />
<div><h1>MIR Workshop 2011 Day 5 Lab</h1><br />
<h2>Douglas Eck, Google</h2><br />
<br />
<br />
<h2>Overview</h2><br />
This lab covers the construction of parts of a music recommender. Focus is placed on building a similarity matrix from data and querying that matrix based on cosine distance. <br />
Fast programmers should be able to accomplish considerably more. <br />
<br />
<br />
* The basics (some Python code available to help).<br />
** Calculate acoustic features on CAL500 dataset (students should have already done this.) <br />
** Read in user tag annotations from same dataset provided by UCSD.<br />
** Build similarity matrix based on word vectors derived from these annotations.<br />
** Query similarity matrix with a track to get top hits based on cosine distance.<br />
** Build second similarity matrix using acoustic features. <br />
** Query this similarity matrix with track to get top hits based on cosine distance. <br />
<br />
* Extra (I didn't write code for this, but can help students find examples).<br />
** Query the EchoNest for additional acoustic features and compare to yours. <br />
** Use the CAL500 user annotations as ground truth and evaluate your audio features (ROC curve or some precision measure). <br />
** Compare a 2D visualization of acoustic features versus UCSD user annotations.<br />
<br />
<br />
<br />
Lab code is found in /usr/ccrma/courses/mir20110/cal500_new.<br />
A previous version was uploaded at /usr/ccrma/courses/mir2011/cal500 but it was using filenames from my University of Montreal lab. I renamed audio files to match those of UCSD Cal500</div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011&diff=11925MIR workshop 20112011-07-01T00:16:41Z<p>Deck: </p>
<hr />
<div><b>Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval</b><br />
<br />
== Logistics ==<br />
Workshop Title: '''"Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval"<br />
'''<br />
* 9-5 PM. Mon, 06/27/2011 - Fri, 07/01/2011<br />
* Instructors: <br />
- Jay LeBoeuf, [http://www.imagine-research.com Imagine Research ]<br />
- Rebecca Fiebrink, [http://www.cs.princeton.edu/~fiebrink/Rebecca_Fiebrink/welcome.html Princeton University]<br />
- Douglas Eck, Google Research [http://research.google.com Google]<br />
- Stephen Pope, [http://www.imagine-research.com Imagine Research ]<br />
- Steve Tjoa, University of Maryland / [http://www.imagine-research.com Imagine Research ]<br />
- Leigh Smith, [http://www.imagine-research.com Imagine Research ]<br />
- George Tzanetakis, [http://webhome.cs.uvic.ca/~gtzan/ University of Victoria]<br />
<br />
* Participants:<br />
<br />
== Abstract == <br />
How would you "Google for audio", provide music recommendations based your MP3 files, or have a computer "listen" and understand what you are playing?<br />
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.<br />
<br />
MIR is a highly-interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.<br />
<br />
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.<br />
<br />
Knowledge of basic digital audio principles is required. Familiarity with Matlab is desired. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.<br />
<br />
'''Workshop structure:''' The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.<br />
<br />
== Lectures & Labs ==<br />
<br><u>Day 1:</u> [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day1.pdf Lecture 1 Slides]<br />
* '''Presenters: Jay LeBoeuf & Rebecca Fiebrink'''<br />
* CCRMA Introduction - (Carr/Sasha). CCRMA Tour.<br />
* Introduction to MIR (What is MIR? Why are people interested? Commercial Applications of MIR) <br />
* A brief history of MIR <br />
** See also http://www.ismir.net/texts/Byrd02.html<br />
* Overview of a basic MIR system architecture <br />
* Timing and Segmentation: Frames, Onsets <br />
* Features: ZCR, Spectral moments; Scaling of feature data <br />
* Classification: Instance-based classifiers (k-NN) <br />
* Information Retrieval Basics<br />
** Classifier evaluation (Cross-validation, training and test sets) <br />
** IR Evaluation Metrics (precision, recall, f-measure, AROC,...)<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/recall_precision.pdf Recall-Precision]<br />
*** [http://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf ROC Analysis]<br />
<br />
* Application: Instrument recognition and drum transcription / Using simple heuristics and thresholds (i.e. "Why do we need machine learning?") <br />
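The instance-based classification and IR evaluation topics above can be sketched in a few lines of Python/NumPy. This is a toy illustration of k-NN voting plus precision/recall/F-measure on made-up 2-D features, not the Weka/Wekinator workflow used in the lab:<br />

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training examples."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(np.bincount(train_y[nearest]).argmax())

def precision_recall_f(y_true, y_pred, positive=1):
    """Precision, recall, and F-measure for one class."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Toy 2-D feature space: class 0 clustered near the origin, class 1 near (5, 5).
train_X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                    [5.0, 5.1], [4.9, 5.0], [5.1, 4.8]])
train_y = np.array([0, 0, 0, 1, 1, 1])
test_X = np.array([[0.1, 0.1], [5.0, 5.0]])
y_pred = np.array([knn_predict(train_X, train_y, x) for x in test_X])
print(y_pred, precision_recall_f(np.array([0, 1]), y_pred))
```

In the lab you would of course evaluate with cross-validation on held-out data rather than two hand-picked test points.<br />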
<br><u>Lab 1:</u> <br><br />
* [https://ccrma.stanford.edu/workshops/mir2011/Lab_1_2011.pdf Lab 1 - Basic Feature Extraction and Classification] <br><br />
* [http://ccrma.stanford.edu/workshops/mir2011/weka_lab1.pdf Getting started with Weka]<br />
* [https://ccrma.stanford.edu/workshops/mir2011/Wekinator_lab_2011.pdf Wekinator Lab]<br />
* Overview of Weka & the Wekinator <br />
** [http://www.cs.waikato.ac.nz/ml/weka/ Weka home]<br />
** [http://code.google.com/p/wekinator/ Wekinator on Google code] and [http://wiki.cs.princeton.edu/index.php/ChucK/Wekinator/Instructions instructions]<br />
Students who need a personal tutorial on Matlab or audio signal processing will split off and receive small-group assistance to bring them up to speed.<br />
* Background for students needing a refresher:<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/2_fft.pdf Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab0/lab0.html Fundamentals of Matlab]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]<br />
<br />
* REMINDER: Save all your work, because you may want to build on it in subsequent labs.<br />
<br />
<br><u>Day 2:</u> [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day2.pdf Lecture 2 Slides]<br />
[http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
* '''Presenters: Leigh Smith & Stephen Pope'''<br />
* Overview: Signal Analysis and Feature Extraction for MIR Applications (Historical: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1999.356)<br />
* MIR Application Design<br />
** Audio input, analysis<br />
** Statistical/perceptual processing<br />
** Data storage<br />
** Post-processing<br />
* Windowed Feature Extraction<br />
** I/O and analysis loops<br />
* Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)<br />
** Kinds/Domains of Features<br />
** Application Requirements (labeling, segmentation, etc.)<br />
* Time-domain features (MPEG-7 Audio book ref)<br />
** RMS, Peak, LP/HP RMS, Dynamic range, ZCR<br />
* Frequency-domain features<br />
** Spectrum, Spectral bins<br />
** Spectral measures (statistical moments)<br />
** Pitch-estimation and tracking<br />
** MFCCs<br />
* Spatial-domain features<br />
** M/S Encoding, Surround-sound Processing Frequency-dependent spatial separation, LCR sources<br />
* Other Feature domains<br />
** Wavelets, LPC<br />
* Onset-detection: Many Techniques<br />
** Time-domain differences<br />
** Spectral-domain differences<br />
** Perceptual data-warping<br />
** Adaptive onset detection<br />
* Beat-finding and Tempo Derivation<br />
** IOIs and Beat Regularity, Rubato<br />
*** Tatum, Tactus and Meter levels<br />
*** Tempo estimation<br />
** Onset-detection vs Beat-detection<br />
*** The Onset Detection Function<br />
** Approaches to beat tracking & Meter estimation<br />
*** Autocorrelation<br />
*** Beat Spectrum measures<br />
*** Multi-resolution (Wavelet)<br />
** Beat Histograms<br />
** Fluctuation Patterns<br />
** Joint estimation of downbeat and chord change<br />
* Applications<br />
** Song clustering based on a variety of feature vectors<br />
** PCA of feature spaces using Weka<br />
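The spectral-domain-difference onset detection idea above can be sketched as a simple spectral-flux detection function. This is a bare-bones illustration on a synthetic signal; real systems add smoothing, adaptive thresholding, and peak picking:<br />

```python
import numpy as np

def spectral_flux_odf(x, frame_size=1024, hop=512):
    """Onset detection function: half-wave-rectified frame-to-frame
    increase in the magnitude spectrum (spectral flux)."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(x) - frame_size) // hop
    prev_mag = np.zeros(frame_size // 2 + 1)
    odf = np.zeros(n_frames)
    for i in range(n_frames):
        frame = x[i * hop:i * hop + frame_size] * window
        mag = np.abs(np.fft.rfft(frame))
        diff = mag - prev_mag
        odf[i] = np.sum(diff[diff > 0])  # keep only energy increases
        prev_mag = mag
    return odf

# Toy signal: silence, then a 440 Hz tone starting halfway through.
sr = 8000
t = np.arange(sr) / sr
x = np.where(t < 0.5, 0.0, np.sin(2 * np.pi * 440 * t))
odf = spectral_flux_odf(x)
print('strongest onset near frame', int(np.argmax(odf)))
```

The detection function stays flat over the silent frames and jumps at the frames straddling the note onset, which is exactly what a peak picker would look for.<br />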
<br><u>Lab 2:</u> <br />
* Feature extraction and flexible feature vectors in MATLAB, Marsyas, Aubio, libExtract<br />
* MATLAB/Weka code for sound clustering with a flexible feature vector<br />
* C++ API examples Marsyas, Aubio, libExtract - pre-built examples to read and customize<br />
* Extract CAL500 per-song features to .mat or .csv using features from today; this will be used in Friday's lab. Copy it from the folder ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)<br />
* Downloads<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Reader.zip UCSB MAT 240F Reader]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Code.zip UCSB MAT 240F Code]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/MAT240F-Sounds.zip UCSB MAT 240F Sounds]<br />
** [https://ccrma.stanford.edu/workshops/mir2011/ODF.zip Onset Detection Function example code in Octave/Matlab]<br />
* Notes on c-API configuration<br />
** FFTW<br />
./configure --help<br><br />
./configure --enable-float<br />
** libSndFile<br />
./configure --disable-external-libs --disable-sqlite<br />
** CAL500 decoding<br />
for i in *.mp3; do echo $i; afconvert -d BEI16@44100 -f AIFF "$i"; done<br />
<br><u>Day 3</u> <br />
* '''Presenters: Stephen Pope & Steve Tjoa'''<br />
* [http://up.stevetjoa.com/tjoa20110629ccrma.pdf Lecture and Lab 3 Slides by Steve Tjoa]<br />
* [https://ccrma.stanford.edu/workshops/mir2011/CCRMA_2011_day3.pdf Lecture 3 Slides]<br />
* Overview: 2nd-Stage Processing and Post-processing in MIR Applications<br />
* 2nd-Stage Processing<br />
** Thresholds and Data Pruning<br />
** Perceptual Mapping<br />
** Data Reduction: Averaging, GMMs, Running Averages<br />
** Feature-data-smoothing: de-spiking, sticky values, filter, etc.<br />
* Segmentation of music and non-musical audio<br />
** Segmentation based on islands of similar features<br />
** Segmentation based on regular difference peaks<br />
** Segmentation based on labeling<br />
* Post-processing: What are we doing?<br />
** Storing Feature Data: SQL, JSON, XML, etc.<br />
** Classification/Clustering/Transcription/Labeling<br />
* Classification: KNN vs SVM training and testing<br />
** SVM tools and APIs<br />
* Clustering vs Classification: Tree-based systems<br />
* Audio Transcription: Onsets and per-onset features<br />
* Other applications: source separation, similarity match, search, etc.<br />
* Classification/estimation in the presence of polyphony<br />
** Try basic approach on a musical mixture.<br />
** How well does it perform? <br />
** What do we do to improve its performance? ICA, NMF, K-SVD.<br />
** Matrix representations of data: spectrogram, chromagram, timbregram, etc.<br />
** Methods to improve NMF/K-SVD under heavy harmonic overlap<br />
* Applications<br />
** Feature vector pruning<br />
** Segmentation examples<br />
** SVMs for classification<br />
** Multipitch estimation, source separation, denoising<br />
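As a concrete example of the NMF approach mentioned above, here is a minimal sketch of Lee-Seung multiplicative updates on a toy "spectrogram" (Euclidean cost, hypothetical templates; real transcription systems operate on magnitude spectrograms and often use divergence costs or sparsity constraints):<br />

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Factor non-negative V (freq x time) into W (spectral templates)
    and H (activations) with Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(0)
    n_f, n_t = V.shape
    W = rng.random((n_f, rank)) + eps
    H = rng.random((rank, n_t)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update templates
    return W, H

# Toy spectrogram: two spectral templates active at different times.
w1, w2 = np.array([1.0, 0.0, 0.5]), np.array([0.0, 1.0, 0.2])
V = np.outer(w1, [1, 1, 0, 0]) + np.outer(w2, [0, 0, 1, 1])
W, H = nmf(V, rank=2)
print('reconstruction error:', np.linalg.norm(V - W @ H))
```

The rows of H recover which "note" is active in which frames, which is the core of NMF-based transcription and separation.<br />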
<br><u>Lab 3:</u> <br />
* 2nd-Stage Processing<br />
* SVM tools<br />
* Classification examples<br />
<br />
* If you finish early, see the "bonus labs" section below.<br />
<br />
<br><u>Day 4:</u> <br />
* '''Presenter: George Tzanetakis'''<br />
* Features: <br />
** Monophonic Pitch Detection <br />
** Polyphonic Pitch Detection <br />
** Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma) <br />
* Analysis: <br />
** Dynamic Time Warping<br />
** Hidden Markov Models <br />
** Harmonic Analysis/Chord and Key Detection <br />
* Applications<br />
** Audio-Score Alignment <br />
** Cover Song Detection <br />
** Analysis of religious cantillation (Computational Ethnomusicology) <br />
** Query-by-humming <br />
** Music Transcription <br />
* Tools <br />
** Marsyas <br />
** Python/NumPy/Matplotlib<br />
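The monophonic pitch detection topic above can be sketched with a simple autocorrelation peak pick (a toy example; production pitch trackers add normalization, voicing decisions, and octave-error handling):<br />

```python
import numpy as np

def autocorr_pitch(x, sr, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental of a monophonic frame by picking the
    autocorrelation peak within the allowed lag range."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]  # lags 0..N-1
    lag_min = int(sr / fmax)
    lag_max = int(sr / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max + 1])
    return sr / lag

# A 220 Hz sine (A3) should come back within about 1 Hz of rounding error.
sr = 44100
t = np.arange(2048) / sr
x = np.sin(2 * np.pi * 220.0 * t)
print(autocorr_pitch(x, sr))
```

Restricting the lag search range is what keeps the estimator inside a plausible pitch range instead of locking onto the zero-lag peak.<br />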
<br><u>Lab 4: </u><br />
* Marsyas compilation <br />
** Instructions for CCRMA Machines [http://ccrma.stanford.edu/workshops/mir2011/marsyas_ccrma2011.pdf marsyas_ccrma2011.pdf]<br />
*** SKT: If you get an error about Python.h, install the package python2.7-dev (for version 2.7).<br />
* Marsyas tour <br />
* Plotting and prototyping using the Marsyas Python bindings<br />
* Writing some C++ Marsyas code <br />
* DTW in Matlab [http://labrosa.ee.columbia.edu/matlab/dtw/ Dan Ellis DTW Matlab example] <br />
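For those working in Python rather than Matlab, the same DTW recurrence can be sketched as follows (a textbook dynamic-programming implementation, not Dan Ellis's Matlab code):<br />

```python
import numpy as np

def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Dynamic time warping cost between 1-D sequences x and y."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(x[i - 1], y[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The second sequence is a time-stretched copy of the first:
# DTW absorbs the stretch, so the alignment cost is zero.
a = [0, 1, 2, 3, 2, 1]
b = [0, 0, 1, 2, 2, 3, 2, 1, 1]
print(dtw(a, b))
```

Backtracking through D from (n, m) recovers the warping path itself, which is what audio-score alignment actually uses.<br />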
<br />
<br />
<br><u>Day 5:</u> <br />
* '''Presenter: Douglas Eck'''<br />
* Application: Recommender<br />
* Autotagging using CAL500.7<br />
<br />
<br><u>Lab 5</u><br />
* The basics (some Python code available to help).<br />
** Calculate acoustic features on CAL500 dataset (students should have already done this.) <br />
** Read in user tag annotations for the same dataset provided by UCSD.<br />
** Build similarity matrix based on word vectors derived from these annotations.<br />
** Query similarity matrix with a track to get top hits based on cosine distance.<br />
** Build second similarity matrix using acoustic features. <br />
** Query this similarity matrix with track to get top hits based on cosine distance. <br />
<br />
* Extra (I didn't write code for this, but can help students find examples).<br />
** Query the EchoNest for additional acoustic features and compare to yours. <br />
** Use the CAL500 user annotations as ground truth and evaluate your audio features (ROC curve or some precision measure). <br />
** Compare a 2D visualization of acoustic features versus UCSD user annotations.<br />
<br />
<br><u>Bonus Lab material</u><br />
* Insert your bonus lab materials here...<br />
* Harmony Analysis Slides / Labs<br />
** [http://ccrma.stanford.edu/workshops/mir2009/juans_lecture/6_harmony.pdf Harmony Analysis (lecture slides from Juan Bello)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-ieee-taslp08-print.pdf Chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/references/klee-lncs08.pdf Genre-specific chord recognition using HMMs (Kyogu Lee)]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.tgz Lab - download lab3.tgz]<br />
** [http://ccrma.stanford.edu/workshops/mir2009/Lab3/lab3.html Lab - Key estimation, chord recognition]<br />
<br />
== software, libraries, examples ==<br />
Applications & Environments<br />
* [http://www.mathworks.com/products/matlab/ MATLAB]<br />
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka Machine Learning and Data Mining Toolbox (Standalone app / Java)] <br />
<br />
Machine Learning Libraries & Toolboxes<br />
* [http://www.ncrg.aston.ac.uk/netlab/ Netlab Pattern Recognition and Clustering Toolbox (Matlab)]<br />
* [http://www.csie.ntu.edu.tw/~cjlin/libsvm/#matlab libsvm SVM toolbox (Matlab)] <br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/Download/fg_base_view MIR Toolboxes (Matlab)]<br />
* [http://cosmal.ucsd.edu/cal/projects/CATbox/catbox.htm UCSD CatBox]<br />
Optional Toolboxes<br />
* [http://www.ofai.at/~elias.pampalk/ma/ MA Toolbox]<br />
* [http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox MIDI Toolbox] <br />
* (see also the references below)<br />
* [http://marsyas.sness.net/ Marsyas]<br />
* CLAM<br />
* Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/<br />
* Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/<br />
* HTK http://htk.eng.cam.ac.uk/<br />
<br />
== Supplemental papers and information for the lectures...==<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008_notes Explanations, tutorials, code demos, recommended papers here - for each topic....]<br />
* [http://ccrma.stanford.edu/workshops/mir2011/BeatReferences.pdf A list of beat tracking references cited]<br />
<br />
== Past CCRMA MIR Workshops and lectures== <br />
* [http://ccrma.stanford.edu/wiki/MIR_workshop_2009 CCRMA MIR Summer Workshop 2009]<br />
* [http://cm-wiki.stanford.edu/wiki/MIR_workshop_2008 CCRMA MIR Summer Workshop 2008]<br />
<br />
== References for additional info == <br />
Recommended books: <br />
* Data Mining: Practical Machine Learning Tools and Techniques, Second Edition by Ian H. Witten , Eibe Frank (includes software)<br />
* Netlab by Ian T. Nabney (includes software)<br />
* Signal Processing Methods for Music Transcription, Klapuri, A. and Davy, M. (Editors)<br />
* Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang (Editor), Guy J. Brown (Editor)<br />
* Speech and Audio Signal Processing:Processing and perception of speech and music Ben Gold & Nelson Morgan, Wiley 2000 <br />
<br />
Prerequisite / background material: <br />
* http://140.114.76.148/jang/books/audioSignalProcessing/<br />
* [http://ccrma.stanford.edu/workshops/mir2008/learnmatlab_sp3.pdf The Mathworks' Matlab Tutorial]<br />
* [http://ismir2007.ismir.net/proceedings/ISMIR2007_tutorial_Lartillot.pdf ISMIR2007 MIR Toolbox Tutorial]<br />
<br />
Papers:<br />
* Check out the references listed at the end of the Klapuri & Davy book<br />
* Check out Papers listed on Pg 136-7 of MIR Toolbox: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1<br />
<br />
Other books: <br />
* Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop <br />
* Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995.<br />
* Pattern Classification, 2nd edition, R Duda, P Hart and D Stork, Wiley Interscience, 2001.<br />
* "Artificial Intelligence: A Modern Approach" Second Edition, Russell R & Norvig P, Prentice Hall, 2003.<br />
* Machine Learning, Tom Mitchell, McGraw Hill, 1997.<br />
<br />
Interesting Links: <br />
* http://www.ifs.tuwien.ac.at/mir/howtos.html<br />
* http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials<br />
* http://www.music-ir.org/evaluation/tools.html<br />
* http://140.114.76.148/jang/matlab/toolbox/<br />
* http://htk.eng.cam.ac.uk/<br />
<br />
== Audio Source Material ==<br />
OLPC Sound Sample Archive (8.5 GB) [http://wiki.laptop.org/go/Sound_samples]<br />
<br />
http://www.tsi.telecom-paristech.fr/aao/en/category/database/<br />
<br />
RWC Music Database (n DVDs) [available in Stanford Music library]<br />
<br />
[http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html RWC - Sound Instruments Table of Contents]<br />
<br />
<br />
[http://theremin.music.uiowa.edu/MIS.html University of Iowa Musical Instrument Samples]<br />
<br />
https://ccrma.stanford.edu/wiki/MIR_workshop_2008_notes#Research_Databases_.2F_Collections_of_Ground_truth_data_and_copyright-cleared_music<br />
<br />
== MATLAB Utility Scripts ==<br />
* [http://ccrma.stanford.edu/~mw/ Mike's scripts] <br />
<br />
* [[Reading MP3 Files]]<br />
* [[Low-Pass Filter]]<br />
* Steve Tjoa: [http://ccrma.stanford.edu/~kiemyang/software Matlab code] (updated July 9, 2009)<br />
<br />
[[Category: Workshops]]<br />
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/</div>Deckhttps://ccrma.stanford.edu/mediawiki/index.php?title=MIR_workshop_2011_day5_lab&diff=11924MIR workshop 2011 day5 lab2011-07-01T00:15:38Z<p>Deck: Created page with 'Lab code is found in /usr/ccrma/courses/mir20110/cal500_new. A previous version was uploaded at /usr/ccrma/courses/mir2011/cal500 but it was using filenames from my University of…'</p>
<hr />
<div>Lab code is found in /usr/ccrma/courses/mir20110/cal500_new.<br />
A previous version was uploaded at /usr/ccrma/courses/mir2011/cal500 but it was using filenames from my University of Montreal lab. I renamed audio files to match those of UCSD Cal500</div>Deck