Research Projects

I do research in human-centered music information retrieval, combining music, machine learning, and cognition. For my PhD, I am advised by Dr. Brian McFee and Dr. Pablo Ripollés. I have collaborated with researchers at NYU, Stanford, and UCLA, and with audio technology companies including Universal Audio, Spotify, Smule, and Shazam.

My research publications are listed below, and my code is available on my GitHub page.


Instrument Design for a Laptop Opera

My Stanford team's NIME paper, describing the creation of The Furies: A new LaptOpera.


HitPredict: Using Spotify Data to Predict Billboard Hits

Using several machine learning algorithms, we predict the Billboard success of a song with ~75% accuracy.

Music Visualizer

A music visualizer I built for my Music, Computing, and Design course with Ge Wang at Stanford.


BassBoost: Chord Estimation with Inversions in Beatles Songs

Adding bass information to major/minor chord estimates generated by a machine-learning-based method.


Vocal Expression

My ISMIR 2020 late-breaking demo paper: An Evaluation Tool for Subjective Evaluation of Amateur Vocal Performances of “Amazing Grace.”


Cochlear Implant Listening

Familiarity, quality, and preference in cochlear implant listening.

Research Publications

Acknowledgements

Invited Talks and Posters

Performances