[CANCELLED!] TEMPO VS. PITCH: UNDERSTANDING SELF-SUPERVISED TEMPO ESTIMATION

Giovana Morais (NYU) joins us to talk about her recent ICASSP paper.

ABSTRACT: Self-supervision methods learn representations by solving pretext tasks that do not require human-generated labels, alleviating the need for time-consuming annotations. These methods have been applied in computer vision, natural language processing, environmental sound analysis, and recently in music information retrieval, e.g. for pitch estimation. Particularly in the context of music, there are few insights into how fragile these models are with respect to different data distributions, and how that fragility could be mitigated. In this paper, we explore these questions by dissecting a self-supervised model for pitch estimation adapted for tempo estimation via rigorous experimentation with synthetic data. Specifically, we study the relationship between the input representation and data distribution for self-supervised tempo estimation.
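For attendees unfamiliar with the pretext-task idea mentioned in the abstract, below is a minimal, hypothetical Python sketch of how a self-supervised training pair for relative tempo estimation could be constructed: the target is a synthetically applied tempo shift, known by construction, so no human annotation is needed. The click-track generator, the stretch-factor range, and the log-scale target here are illustrative assumptions, not the paper's actual setup.

    # Hypothetical sketch of a label-free pretext task for tempo:
    # predict the relative tempo shift between two versions of the
    # same (synthetic) signal. The shift is known by construction.
    import numpy as np

    SR = 22050  # sample rate in Hz (assumed)

    def click_track(tempo_bpm: float, duration_s: float = 5.0) -> np.ndarray:
        """Synthesize a simple impulse (click) track at a given tempo."""
        y = np.zeros(int(SR * duration_s), dtype=np.float32)
        period = int(SR * 60.0 / tempo_bpm)  # samples between beats
        y[::period] = 1.0
        return y

    def make_pretext_pair(rng: np.random.Generator):
        """Return (anchor, shifted, target); the target is the relative
        tempo shift applied by construction -- no human label needed."""
        base_tempo = rng.uniform(60.0, 180.0)        # hidden "true" tempo
        stretch = 2.0 ** rng.uniform(-0.5, 0.5)      # random relative factor
        anchor = click_track(base_tempo)
        shifted = click_track(base_tempo * stretch)  # same content, new tempo
        target = np.log2(stretch)                    # self-generated target
        return anchor, shifted, target

    rng = np.random.default_rng(0)
    anchor, shifted, target = make_pretext_pair(rng)
    print(f"relative tempo-shift target (in octaves): {target:+.3f}")

A model trained on such pairs only ever sees relative shifts, which is one reason the choice of input representation and the tempo distribution of the training data matter, as the talk examines.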
NOTE: this event is part of the DL4MIR workshop series (ccrma-mir.github.io); guest speaker talks are open to the broader CCRMA community.