My research focuses on using brain and behavioral responses to better understand how we perceive and engage with music, sound, and images. Other research interests include music information retrieval and interactions with music services; development and application of novel EEG analysis techniques; and promotion of reproducible and cross-disciplinary research through open-source software and datasets.
I am currently a Research and Development Associate with the Educational Neuroscience Initiative in the Graduate School of Education at Stanford University. I am also an Adjunct Professor at Stanford's Center for Computer Research in Music and Acoustics (CCRMA), where I conduct research as part of the Music Engagement Research Initiative (MERI). I collaborate with the Stanford Vision and Neuro-Development Lab, the Stanford Translational Auditory Research Lab, and the Dmochowski Lab at City College of New York. I was previously a Research Scientist in the Department of Otolaryngology – Head & Neck Surgery at the Stanford School of Medicine, and before that a Postdoctoral Scholar at Stanford's Center for the Study of Language and Information (CSLI) and CCRMA. I completed my PhD at CCRMA in 2016. I was affiliated with the Suppes Brain Lab at CSLI from 2007–2017. I worked at Shazam from 2012–2016 and at Smule in 2018.
As a first-gen college student, woman in STEM, and Native Hawaiian, I have experienced both the challenges of feeling underrepresented in my environments and the benefits of mentorship and positive role models. I make an effort to help others through direct mentorship and community initiatives. I am actively involved with Women in Music Information Retrieval (WiMIR), where I am a founding co-organizer of both the WiMIR Mentoring Program (now in its fourth year) and WiMIR Workshop (now in its second year), and serve on the board of the International Society for Music Information Retrieval (ISMIR). I am also active with the First-Gen/Low-Income program at Stanford University, where as a graduate student and postdoc I helped organize their mentoring program for two years.
May 6, 2020 | Neural correlation study published
Our study "Natural Music Evokes Correlated EEG Responses Reflecting Temporal Structure and Beat" is now available (open access) in NeuroImage! The paper is part of a special issue on Naturalistic Imaging. We computed inter-subject and stimulus-response EEG correlations in the time and frequency domains to investigate how listeners process intact and scrambled real-world music. Our findings offer insights into the roles of novelty, surprise, and music's temporal structure in engaged listening!
May 6, 2020 | Joining EdNeuro
As of the end of March, I am now a Research and Development Associate with the Educational Neuroscience Initiative in the Graduate School of Education! I'm excited to continue working in the realm of EEG as both a researcher and mentor.
August 2, 2019 | Research updates
It's been a very productive year so far, now that I'm focused back on (academic) research!
Some long-term music/auditory neuroscience projects are moving forward:
- Our study on neural correlation during natural music listening is now available as a preprint. We have also made a major update to the accompanying open EEG dataset in order to make it easier to use and more compatible with other datasets.
- A co-author preprint on classification of frequency-following responses to music and speech stimuli (led by Steven Losorelli) and an accompanying open dataset are available while the manuscript is under revision.
- A major update to MatClassRSA, our EEG classification toolbox (led by Bernard Wang), is also coming soon.
In other music research, I'm happy to be supervising various student projects:
- A paper on collaborative playlists (led by So Yeon Park) has been accepted to the ISMIR2019 conference.
- Next week at SMPC, Jay Appaji will present perceptual results from our study on processing of complex rhythms. We look forward to analyzing the accompanying EEG data soon!
- Camille Noufi wrote a short paper on accent classification in amateur singers, which was presented at the Machine Learning for Music Discovery (ML4MD) Workshop at ICML2019.
- It was also fun to return to my own Shazam research with help from Brandi Frisbie and Elena Georgieva for an invited talk at the ML4MD Workshop.
Finally, in vision research:
- A co-author EEG paper on processing of directional motion by children and adults (led by Catherine Manning) is now published in Developmental Cognitive Neuroscience.
- A co-author short paper comparing neural network and EEG representations of object categories (led by Nathan Kong) has been accepted to the 2019 Conference on Cognitive Computational Neuroscience; a longer manuscript is coming soon.
September 17, 2018 | Moving to Smule
I have made a career change from academia to industry! I am now Music Research Lead at Smule.
September 1, 2017 | Moving to Otolaryngology
I have joined the Department of Otolaryngology in the Stanford School of Medicine as a Research Scientist.
August 7, 2017 | See you at ISMIR
I'll be attending the 18th International Society for Music Information Retrieval Conference (ISMIR2017) in Suzhou, China from October 23–27. The MERI group has three accepted full papers at the conference, with some late-breaking submissions likely to follow!
June 22, 2017 | See you at SMPC
I'll be attending the SMPC conference at UC San Diego at the end of July, along with several members of the MERI group. We'll be presenting various talks and posters, which are listed on the MERI Publications page.
July 15, 2017 | Stanford Music and the Brain symposium
CCRMA is hosting a Music and the Brain symposium on Saturday, July 15. I'll be speaking about some current research projects of the Music Engagement Research Initiative (MERI). The event is currently at capacity, but waitlist registration is available here.
March 22, 2017 | Shazam paper published
Our Frontiers paper "Characterizing Listener Engagement with Popular Songs Using Large-Scale Music Discovery Data" looks at when during a song people tend to perform Shazam queries. It's published as part of the Research Topic titled Bridging Music Informatics with Music Cognition. Accompanying the paper is a dataset containing the query dates and offsets (time in song that a query was performed) for the over 188 million Shazam queries analyzed in the study, available for download from the Stanford Digital Repository.
March 1, 2017 | See you at OHBM
I'll be presenting the poster "Factors Determining Temporal Reliability of Ongoing EEG Responses to Naturalistic Music" at the OHBM Conference coming up in Vancouver in June.