DDSP: Audio and Music Processing with PyTorch
This workshop is an introduction to audio and music processing with an emphasis on signal processing and machine learning. Participants will learn to build tools to analyze and manipulate digital audio signals with PyTorch, an efficient machine learning framework used in both academia and industry. Both the theory and practice of digital audio processing will be covered, with hands-on algorithm-implementation exercises. These concepts will be applied to various topics in music information retrieval, an interdisciplinary research field for processing music-related data. There are no formal prerequisites, but some knowledge of Python is assumed.
In-person (CCRMA, Stanford) and online enrollment options are available during registration (see the red button above). Students receive the same teaching materials and tutorials in either format; however, in-person students will have more opportunities for in-depth, hands-on 1:1 discussion and feedback with the instructors.
Schedule
Day 1: The discrete Fourier transform and spectral features
- Morning: the discrete Fourier transform
- Afternoon: spectral feature extraction
- Lab: supervised additive and subtractive audio synthesis with PyTorch
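To give a flavor of the Day 1 lab, here is a minimal sketch of learnable additive synthesis in PyTorch: a bank of sinusoids whose amplitudes are fit by gradient descent against a target magnitude spectrum. The frequencies, loss, and hyperparameters below are illustrative assumptions, not the workshop's actual materials.

```python
import torch

# Additive synthesis: a sum of sinusoids with learnable amplitudes.
# All names and constants here are illustrative, not the workshop's code.
sr = 16000                       # sample rate (Hz)
t = torch.arange(sr) / sr        # one second of time samples
freqs = torch.tensor([220.0, 440.0, 660.0])  # partial frequencies (Hz)
amps = torch.rand(3, requires_grad=True)     # learnable partial amplitudes

def mag(x):
    # Magnitude spectrum via the real FFT; the small epsilon keeps
    # gradients finite at near-zero bins.
    s = torch.fft.rfft(x, norm="forward")
    return torch.sqrt(s.real ** 2 + s.imag ** 2 + 1e-12)

# Target: a plain 440 Hz sine. Fit the amplitudes by gradient descent
# on an L2 loss between magnitude spectra.
target = torch.sin(2 * torch.pi * 440.0 * t)
opt = torch.optim.Adam([amps], lr=0.05)

for step in range(200):
    opt.zero_grad()
    synth = (amps[:, None] * torch.sin(2 * torch.pi * freqs[:, None] * t)).sum(0)
    loss = (mag(synth) - mag(target)).pow(2).mean()
    loss.backward()
    opt.step()

print(amps.detach())  # the 440 Hz partial's amplitude should dominate
```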
Day 2: Audio effects and filter design
- Morning: digital filter theory
- Afternoon: filter implementation and analysis
- Lab: parameter learning for IIR and FIR filter design with PyTorch
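In the same spirit, the Day 2 lab idea can be sketched as learning FIR filter taps by gradient descent toward a target frequency response (here an ideal low-pass). The tap count, cutoff, and optimizer settings are assumptions for illustration.

```python
import torch

# Learning FIR filter taps by gradient descent: a toy version of the
# Day 2 lab idea, with made-up dimensions and hyperparameters.
n_taps = 64
taps = 0.01 * torch.randn(n_taps)  # small random init avoids zero-magnitude bins
taps.requires_grad_()

# Target: an ideal low-pass magnitude response with cutoff at 1/4 Nyquist.
n_freqs = 257
target_mag = torch.zeros(n_freqs)
target_mag[: n_freqs // 4] = 1.0

opt = torch.optim.Adam([taps], lr=0.01)
for step in range(500):
    opt.zero_grad()
    # Frequency response of the FIR filter via a zero-padded real FFT.
    mag = torch.fft.rfft(taps, n=2 * (n_freqs - 1)).abs()
    loss = (mag - target_mag).pow(2).mean()
    loss.backward()
    opt.step()
```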
Day 3: Beat, rhythm, and tempo
- Morning: beat tracking and rhythm analysis
- Afternoon: non-linear resonance and gradient frequency neural networks (GrFNN)
- Lab: beat finding with a GrFNN in PyTorch
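A full GrFNN will not fit in a few lines, so the sketch below shows a classic baseline for the same task instead: tempo estimation from an onset envelope via autocorrelation. The envelope is synthetic and all constants are assumed.

```python
import torch

# Autocorrelation tempo estimation over a synthetic onset envelope:
# a simple baseline, not the GrFNN approach covered in the lab itself.
sr_env = 100                      # onset-envelope frame rate (Hz), assumed
n = 10 * sr_env                   # ten seconds of frames
env = torch.zeros(n)
env[:: sr_env // 2] = 1.0         # impulses every 0.5 s -> 120 BPM

# Autocorrelation via the FFT (Wiener-Khinchin), then pick the strongest
# lag inside a plausible beat-period range (0.25 s to 2 s).
spec = torch.fft.rfft(env, n=2 * n)
acf = torch.fft.irfft(spec * spec.conj(), n=2 * n)[:n]
lo, hi = int(0.25 * sr_env), int(2.0 * sr_env)
lag = lo + int(torch.argmax(acf[lo:hi]))
print(60.0 * sr_env / lag, "BPM")  # should print 120.0 BPM
```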
Day 4: Pitch, harmony, and transcription
- Morning: pitch representations and detection
- Afternoon: music transcription and source separation
- Lab: key estimation and chord recognition with Hidden Markov Models (HMM)
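The HMM machinery behind the Day 4 lab reduces to Viterbi decoding. Below is a minimal log-domain Viterbi in PyTorch over a made-up two-state model (the states could stand for two keys or chords); the probabilities, observations, and uniform initial distribution are toy assumptions.

```python
import torch

# A minimal Viterbi decoder: the core of HMM-based key/chord recognition.
# The two-state model and all probabilities here are toy assumptions.
log_trans = torch.tensor([[0.9, 0.1],    # transition probabilities
                          [0.1, 0.9]]).log()
log_emit = torch.tensor([[0.8, 0.2],     # P(observation | state)
                         [0.3, 0.7]]).log()
obs = torch.tensor([0, 0, 1, 1, 1])      # observed feature indices

delta = log_emit[:, obs[0]].clone()      # best log-prob ending in each state
back = []
for o in obs[1:]:
    scores = delta[:, None] + log_trans  # scores[i, j]: from state i to j
    best, idx = scores.max(dim=0)        # best predecessor for each state j
    delta = best + log_emit[:, o]
    back.append(idx)

# Backtrace the most likely state sequence.
state = int(delta.argmax())
path = [state]
for idx in reversed(back):
    state = int(idx[state])
    path.append(state)
print(path[::-1])  # prints [0, 0, 1, 1, 1]
```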
Day 5: Music information retrieval and machine learning
- Morning: regression, clustering and classification
- Afternoon: dataset/model preparation
- Lab: music genre classification using deep neural representations with PyTorch
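Finally, the Day 5 lab idea of classifying genre on top of learned audio representations can be sketched as a small classifier head trained over pre-computed embeddings. The embedding size, genre count, and random data below are placeholders, not the workshop dataset.

```python
import torch
from torch import nn

# A tiny genre classifier over pre-computed audio embeddings: a sketch of
# the Day 5 lab idea. Dimensions, class count, and data are made up.
n_embed, n_genres = 128, 10
model = nn.Sequential(nn.Linear(n_embed, 64), nn.ReLU(), nn.Linear(64, n_genres))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random stand-ins for embeddings from a pretrained audio network.
x = torch.randn(32, n_embed)
y = torch.randint(0, n_genres, (32,))

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)  # cross-entropy over genre logits
    loss.backward()
    opt.step()
```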
About the instructors
Iran R. Roman holds a PhD from CCRMA. He is currently a theoretical neuroscientist and machine listening scientist at New York University’s Music and Audio Research Laboratory. Iran is a passionate instructor with extensive experience teaching artificial intelligence and deep learning. His industry experience includes deep learning engineering internships at Plantronics in 2017, Apple in 2018 and 2019, Oscilloscape in 2020, and Tesla in 2021. Iran’s research has focused on using deep learning for auditory scene analysis and human action understanding.
Chuyang Chen is a student and research assistant at New York University’s Music and Audio Research Laboratory. With a background in music technology, computer science, and electrical engineering, Chuyang is passionate about building machine listening systems using artificial intelligence, signal processing, and mathematical modeling techniques. His past research topics include beat tracking, music similarity, urban acoustics, and audio-visual analysis.
Scholarship opportunity:
https://docs.google.com/forms/d/e/1FAIpQLSdL4LWoX5EpYUEp0UMFUhhmgMWOHkd8VlF70G9BK8e3-AfX2w/viewform