Learning Sparse Feature Representations For Music Annotation And Retrieval

Title: Learning Sparse Feature Representations For Music Annotation And Retrieval
Publication Type: Conference Paper
Year of Publication: 2012
Authors: Nam, J., J. Herrera, M. Slaney, and J. Smith
Conference Name: 13th International Society for Music Information Retrieval Conference
Date Published: 10/2012
Conference Location: Porto, Portugal
Abstract: We present a data-processing pipeline based on sparse feature learning and describe its applications to music annotation and retrieval. Content-based music annotation and retrieval systems process audio starting with features. While commonly used features, such as MFCCs, are handcrafted to extract characteristics of the audio in a succinct way, there is increasing interest in learning features automatically from data using unsupervised algorithms. We describe a systematic approach to applying feature-learning algorithms to music data, focusing in particular on a high-dimensional sparse-feature representation. Our experiments show that, using only a linear classifier, the newly learned features produce results on the CAL500 dataset comparable to state-of-the-art music annotation and retrieval systems.
URL: http://ccrma.stanford.edu/~juhan/pubs/jnam-ismir2012.pdf
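
The sketch below is a rough illustration of the two-stage pipeline the abstract describes: unsupervised sparse feature learning on audio frames, followed by a linear classifier for annotation. It uses scikit-learn's MiniBatchDictionaryLearning and LogisticRegression on placeholder data; the array shapes, hyperparameters, and variable names are illustrative assumptions, not the authors' implementation or the CAL500 setup.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
frames = rng.standard_normal((1000, 64))   # placeholder stand-in for audio feature frames
labels = rng.integers(0, 2, size=1000)     # placeholder binary tag (CAL500 uses many tags)

# Unsupervised stage: learn an overcomplete dictionary and encode each frame
# as a sparse, high-dimensional activation vector.
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0,
                                   batch_size=128, random_state=0)
codes = dico.fit_transform(frames)         # sparse codes serve as the learned features

# Supervised stage: a simple linear classifier on the learned features,
# trained per tag for annotation and retrieval.
clf = LogisticRegression(max_iter=1000)
clf.fit(codes, labels)
print("training accuracy:", clf.score(codes, labels))
```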