MIR workshop 2014

From CCRMA Wiki
 
=== Day 3: Machine Learning, Clustering and Classification ===

Lecture X: Classification: Unsupervised vs. Supervised, k-means, GMM, SVM - Steve [http://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_ML.pdf Lecture 5 Slides]

Lecture Y: Leigh Smith, Evaluation Metrics for Information Retrieval [https://ccrma.stanford.edu/workshops/mir2013/CCRMA_MIR2013_IR.pdf Slides]

Demo: iZotope Discover (Sound Similarity Search) [http://www.izotope.com/tech/cloud/mediamined.asp Video]

Revision as of 17:28, 11 June 2014

Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval


NEW PAGE FOR 2014

Logistics

Workshop Title: Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval

Abstract

How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing? This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.

MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord progressions, genre, or song structure - MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, transcription, and even aiding/generating real-time performance.
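To make this concrete, the unsupervised clustering covered in Day 3 can be sketched in a few lines. This is an illustrative example only, not workshop code: the 2-D "timbre feature" vectors below are synthetic stand-ins for features that would really be extracted from audio, and the minimal k-means implementation uses a simple deterministic initialization for reproducibility.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its assigned points."""
    # Deterministic init for this sketch: pick k evenly spaced data points.
    centroids = X[::len(X) // k][:k].copy()
    for _ in range(n_iter):
        # Distance of every point to every centroid -> shape (n_points, k).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two synthetic "timbre feature" clouds standing in for extracted features
# (e.g. darker vs. brighter sounds); values are made up for illustration.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(1.0, 0.1, (20, 2))])
labels, centroids = kmeans(X, k=2)
```

With well-separated feature clouds like these, the first 20 points end up in one cluster and the last 20 in the other, which is the kind of unsupervised grouping that powers similarity search and playlist clustering.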

This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval techniques, and design and ev