Difference between revisions of "MIR workshop 2008"
(→Labs)
Line 148:
Abstract: As we anecdotally observed in yesterday's lab, several parameters can affect the quality of our classifications:
# Audio files used in the training data sets.
# Features used in training / testing.
# Use of scaling for features.
# The size of the frames extracted from the audio.
We'll gain an intuitive feel for the effect of each of these parameters by trying some simple experiments (see the sketch below).
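The sketch below is a minimal MATLAB illustration of how two of these parameters (feature scaling and frame size) could be varied in such an experiment; the file name, the frame sizes, and the choice of spectral centroid as the feature are placeholders for illustration, not the workshop's actual lab code.

 % Minimal sketch (hypothetical file name, frame sizes, and feature choice):
 % compute a per-frame spectral centroid at two frame sizes, then z-score
 % scale the feature so scaled vs. unscaled versions can be fed to a classifier.
 [x, fs] = wavread('example.wav');   % placeholder training file (audioread in newer MATLAB)
 x = x(:, 1);                        % keep one channel if the file is stereo
 
 for frameSize = [512 2048]          % parameter 4: frame size
     hop = frameSize / 2;
     nFrames = floor((length(x) - frameSize) / hop) + 1;
     centroid = zeros(nFrames, 1);
     freqs = (0:frameSize/2 - 1)' * fs / frameSize;
     win = 0.54 - 0.46 * cos(2*pi*(0:frameSize-1)' / (frameSize-1));  % Hamming window
 
     for i = 1:nFrames
         frame = x((i-1)*hop + 1 : (i-1)*hop + frameSize) .* win;
         mag = abs(fft(frame));
         mag = mag(1:frameSize/2);
         centroid(i) = sum(freqs .* mag) / (sum(mag) + eps);  % spectral centroid in Hz
     end
 
     scaled = (centroid - mean(centroid)) / std(centroid);    % parameter 3: z-score scaling
     fprintf('frame size %4d: centroid mean %.1f Hz, std %.1f Hz\n', ...
             frameSize, mean(centroid), std(centroid));
 end

Training a classifier on the scaled versus the unscaled feature, at each of the two frame sizes, gives a direct side-by-side comparison of how each choice affects classification quality.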
Revision as of 18:03, 6 August 2008
Contents
- 1 CCRMA Workshop: Music Information Retrieval
- 1.1 logistics
- 1.2 Abstract
- 1.3 Workshop syllabus
- 1.4 software, libraries, examples
- 1.5 Lectures
- 1.6 Labs
- 1.7 Final Projects from MIR 2008 Workshop
- 1.8 References for additional info
- 1.9 Audio Source Material
- 1.10 THE WIKI VALUE-ADD - Supplemental information for the lectures...
- 1.11 MATLAB Utility Scripts
- 1.12 ChucK
CCRMA Workshop: Music Information Retrieval
logistics
Workshop Title: "Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval"
- 9 AM to 5 PM, July 21 through August 1, 2008
- Instructors: Jay LeBoeuf and Ge Wang
Abstract
This workshop will teach the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using Music Information Retrieval (MIR) algorithms.
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to “listen” and “understand or make sense of” audio data, such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music – tempo, key, chord progressions, genre, or song structure – MIR algorithms are capable of recognizing and extracting this information, enabling systems to perform extensive sorting, searching, music recommendation, metadata generation, and transcription, and even to aid or generate real-time performance.
This workshop will target students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate the myriad of exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction and the generation of higher-level features, including onset timings and chord estimations