 
<br><u>Lab 1</u>
 
* [http://ccrma.stanford.edu/workshops/mir2008/Lab%201%20-%20Playing%20with%20audio%20slices.pdf Lab 1 - "Playing with audio slices"] - Jay to trim this lab down considerably.
 
* Onset detection
** [http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/lab/lab2/lab2_3.m Time-domain method (lab2_3.m)]
** [http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/lab/lab2/lab2_4.m Frequency-domain method (lab2_4.m)]
** [http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/lab/lab2/lab2_5.m Phase-based method (lab2_5.m)]
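The time-domain approach above can be sketched in a few lines: track the short-time energy of the signal and flag frames where it rises sharply. This is a minimal illustrative sketch in Python (the actual lab code, lab2_3.m, is MATLAB and may differ); the function name and threshold scheme here are assumptions, not taken from the lab.

```python
# Hypothetical energy-based onset detector (time-domain method).
# Not the lab's code -- a minimal sketch of the same idea.
import numpy as np

def energy_onsets(x, frame_len=1024, hop=512, threshold=1.5):
    """Return frame indices where short-time energy jumps sharply."""
    # Short-time energy of each analysis frame
    n_frames = 1 + (len(x) - frame_len) // hop
    energy = np.array([
        np.sum(x[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])
    # Keep only positive frame-to-frame energy increases
    diff = np.maximum(0.0, np.diff(energy))
    # Flag frames whose energy rise exceeds threshold * mean rise
    return np.flatnonzero(diff > threshold * diff.mean()) + 1

# Example: one second of silence followed by a sine burst at sr = 8000;
# the detected onset frames cluster at the silence/tone boundary.
sr = 8000
t = np.arange(sr) / sr
x = np.concatenate([np.zeros(sr), 0.5 * np.sin(2 * np.pi * 440 * t)])
onsets = energy_onsets(x)
```

Energy-based detection works well for percussive material; the frequency- and phase-based labs that follow handle softer onsets where the energy envelope barely moves.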
 
* Background for students needing a refresher:
 
** [http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/lab/lab1/lab1.html Fundamentals of Digital Audio Signal Processing (FFT, STFT, Windowing, Zero-padding, 2-D Time-frequency representation)]
 
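The refresher's core pipeline (windowing, zero-padding, STFT, 2-D time-frequency representation) can be condensed into a short NumPy sketch. This is an illustrative implementation, not the lab's code; the function name and parameter defaults are assumptions.

```python
# Minimal STFT sketch: frame the signal, apply a Hann window,
# zero-pad each frame, and stack the FFTs into a 2-D
# time-frequency matrix. Illustrative only, not the lab code.
import numpy as np

def stft(x, frame_len=256, hop=128, n_fft=512):
    """Short-Time Fourier Transform as a (freq bins x frames) matrix."""
    window = np.hanning(frame_len)      # taper to reduce spectral leakage
    n_frames = 1 + (len(x) - frame_len) // hop
    spec = np.empty((n_fft // 2 + 1, n_frames), dtype=complex)
    for i in range(n_frames):
        frame = x[i * hop : i * hop + frame_len] * window
        # Zero-padding to n_fft samples gives finer frequency bin spacing
        spec[:, i] = np.fft.rfft(frame, n=n_fft)
    return spec

# A 1 kHz sine at sr = 8000 with n_fft = 512 should peak at
# bin 1000 / (8000 / 512) = 64 in every frame.
sr = 8000
t = np.arange(sr) / sr
S = stft(np.sin(2 * np.pi * 1000 * t))
peak_bin = int(np.abs(S[:, 10]).argmax())
```

The magnitude of this matrix, `np.abs(S)`, is the spectrogram used throughout the onset-detection labs above.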

Revision as of 10:19, 18 June 2009

Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval

Logistics

Workshop Title: "Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval"

Abstract

How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" to and understand what you are playing?

This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.

MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to "listen" and "understand or make sense of" audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music - tempo, key, chord pro