Research Projects

My academic research centers on human-centered music information retrieval, with projects spanning music, machine learning, sound recording, vocal expression, and human subjects research. I have worked with researchers at Stanford University's CCRMA, the UCLA Psychology Department, and music tech companies including Universal Audio, Spotify, Smule, and Shazam, and I have several research publications.

images/pic03.jpg

Vocal Expression

My ISMIR 2020 late-breaking demo paper: An Evaluation Tool for Subjective Evaluation of Amateur Vocal Performances of “Amazing Grace.”

images/pic02.jpg

HitPredict: Using Spotify Data to Predict Billboard Hits

We can predict a song's Billboard success with roughly 75% accuracy using several machine learning algorithms trained on Spotify data. See my slides from ICML 2020!
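
For a rough sense of the approach, here is an illustrative Python sketch (not the project's actual code): train a classifier on Spotify audio features labeled by Billboard chart status. The file name, column names, and model choice are hypothetical placeholders.

    # Illustrative sketch: classify tracks as Billboard hits or not from Spotify audio features.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    # Hypothetical dataset: one row per track, Spotify audio features plus a binary "hit" label.
    data = pd.read_csv("tracks_with_features.csv")
    features = ["danceability", "energy", "loudness", "valence", "tempo", "acousticness"]
    X, y = data[features], data["hit"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

Different scikit-learn models can be swapped in at the RandomForestClassifier line to compare several algorithms on the same held-out split.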

images/pic02.jpg

Cochlear Implant Listening

Familiarity, quality, and preference in cochlear implant listening.

images/pic03.jpg

Music Genre Categorization

A project from my neural networks course at UCLA, using Mel-frequency cepstral coefficients (MFCCs) to classify music into four genres. The code was written in MATLAB.
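
The original coursework was in MATLAB; below is a hedged Python sketch of the same idea, using librosa for MFCC extraction, with a simple nearest-neighbor classifier standing in for the course's neural network for brevity. The file paths and genre labels are hypothetical placeholders.

    # Illustrative sketch: time-averaged MFCCs per clip, then a simple classifier over four genres.
    import numpy as np
    import librosa
    from sklearn.neighbors import KNeighborsClassifier

    def mfcc_features(path, n_mfcc=13):
        """Load an audio clip and return its time-averaged MFCC vector."""
        y, sr = librosa.load(path, sr=22050)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    # Hypothetical training clips, one (file, genre) pair per example.
    train = [("clips/rock01.wav", "rock"), ("clips/jazz01.wav", "jazz"),
             ("clips/classical01.wav", "classical"), ("clips/hiphop01.wav", "hip-hop")]

    X = np.array([mfcc_features(f) for f, _ in train])
    y = [genre for _, genre in train]

    clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
    print(clf.predict([mfcc_features("clips/unknown.wav")]))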

Research Publications

Acknowledgements

Invited Talks and Posters

Performances

  • Audiovisual performance (2019). Invited solo performance, BrainMind Summit. Stanford University, USA.

  • Glorious Guilt (2019). Invited performance with the Stanford Sidelobe Laptop Orchestra. Digital Civil Society Conference. Stanford University, USA.