Adaptive and interactive machine listening with minimal supervision

Date: Fri, 02/10/2023, 4:30pm - 5:20pm
Location: CCRMA Classroom (Knoll 217)
Event Type: DSP Seminar
Abstract: Deep learning-based approaches have become popular tools and achieved promising results in machine listening. However, a deep model that generalizes well needs to be trained on a large amount of labeled data. Rare, fine-grained, or newly emerged classes (e.g., a rare musical instrument or a new sound effect), for which large-scale data collection is hard or simply impossible, are often treated as out-of-vocabulary and left unsupported by machine listening systems. In this thesis work, we aim to provide new perspectives and approaches to machine listening tasks with limited labeled data. Specifically, we focus on algorithms designed to work with little labeled data (e.g., few-shot learning) and on incorporating human input to guide the machine. The goal is to develop flexible, customizable machine listening systems that can adapt to different tasks in a data-efficient way with minimal human intervention.
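
As a concrete illustration of the few-shot setting the abstract refers to, below is a minimal sketch of a prototypical-network-style classifier over audio clips, written in PyTorch. This is not the speaker's system; the network, feature shapes, and names (AudioEmbedder, prototype_logits) are assumptions made only for the example. The idea: each class prototype is the mean embedding of a handful of labeled support clips, and a query clip is scored by its distance to each prototype.

# Illustrative sketch only (not from the talk): prototypical-network-style
# few-shot classification over audio embeddings. All names, shapes, and the
# toy network below are assumptions for the example.
import torch
import torch.nn as nn


class AudioEmbedder(nn.Module):
    """Toy embedding network mapping log-mel patches to a vector space."""

    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):          # x: (batch, 1, n_mels, n_frames)
        return self.net(x)


def prototype_logits(embedder, support, support_labels, query, n_classes):
    """Class prototypes = mean support embeddings; queries are scored by
    negative squared Euclidean distance to each prototype."""
    z_support = embedder(support)                    # (n_support, d)
    z_query = embedder(query)                        # (n_query, d)
    prototypes = torch.stack([
        z_support[support_labels == c].mean(dim=0)   # mean embedding per class
        for c in range(n_classes)
    ])                                               # (n_classes, d)
    return -torch.cdist(z_query, prototypes) ** 2    # (n_query, n_classes)


if __name__ == "__main__":
    torch.manual_seed(0)
    embedder = AudioEmbedder()
    # A 3-way, 5-shot episode with random tensors standing in for log-mel patches.
    support = torch.randn(15, 1, 64, 128)
    support_labels = torch.arange(3).repeat_interleave(5)
    query = torch.randn(4, 1, 64, 128)
    logits = prototype_logits(embedder, support, support_labels, query, n_classes=3)
    print(logits.argmax(dim=1))   # predicted class index for each query clip

In this setup, adapting to a new class (say, a rare instrument) only requires a few labeled support clips and no retraining of the embedder, which is the data-efficiency property the abstract emphasizes.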

Zoom link for presentation

Bio: Yu Wang is a Ph.D. candidate in Music Technology at the Music and Audio Research Laboratory (MARL) at New York University, advised by Prof. Juan Pablo Bello. Her research interests focus on machine learning and signal processing for music and general audio. She has interned with Adobe Research, Spotify, and Google Magenta. Before joining MARL in 2017, she was in the Music Recording and Production program at the Institute of Audio Research. She holds two M.S. degrees in Materials Science & Engineering from Massachusetts Institute of Technology (2015) and National Taiwan University (NTU) (2012), and a B.S. in Physics from NTU (2010). Yu is a guitar player and also enjoys sound engineering. Japanese math rock is her current favorite music genre.
FREE
Open to the Public