Music and the Brain Symposium 2017: Engagement

Saturday, July 15, 2017 | 9:30am–3pm
CCRMA (The Knoll), 660 Lomita Drive, Stanford University [directions] [map]

FREE and open to the public (waitlist only—registration is full).
Click here to join the waitlist

Sponsored by the Scott and Annette Turow Fund


9:30am   Check-in / coffee
9:50am   Opening remarks
10:00am  Engagement is in the brain
         Jacek Dmochowski, City College of New York
10:45am  Challenges in engaging musicians, creative coders and listeners in an open-source music generation project
         Douglas Eck, Google
11:45am  Playing music and pictures
         Sageev Oore, Saint Mary's University (Canada) / Google
12:15pm  Music, engagement, and the brain
         Blair Kaneshiro, Stanford University
12:55pm  Closing remarks
1:00pm   Lunch and poster session
3:00pm   End of event

Speaker abstracts and bios

Engagement is in the brain
Jacek Dmochowski, City College of New York

It is well known that humans are limited in their ability to self-report their cognitive states. Given that all experience stems from the nervous system, measurements of brain activity have the potential to characterize and decode mental states. In this talk I will describe a set of experiments whose findings suggest that the temporal properties of neural responses to naturalistic stimuli predict the engagement of the subject. I will show that the level of similarity between responses of multiple subjects to the same stimulus indexes population-level behaviors. I will then show that the correlation between a dynamic stimulus and its neural response reflects both attentional and task variables. To conclude, I will preview future directions for this line of research.

Jacek Dmochowski is an Assistant Professor in the Department of Biomedical Engineering at the City College of New York. His research is focused on developing new techniques for decoding brain states and modulating neural activity. He received his Ph.D. in telecommunications from the Institut National de la Recherche Scientifique (Montreal, Canada) in 2008, and was awarded the Canadian Governor General’s Academic Gold Medal. Prior to joining the faculty at CCNY, he was a Research Associate in the lab of Anthony Norcia at Stanford University (2013-2015), and a Post-Doctoral Fellow in the lab of Lucas Parra at the City College of New York (2008-2013).

Challenges in engaging musicians, creative coders and listeners in an open-source music generation project
Douglas Eck, Google

I'll discuss progress on Magenta, a project from the Google Brain Team focused on generating music and art using deep learning. One goal of Magenta is to use open source to engage with several key audiences: musicians, artists, creative coders and listeners. I'll describe some specific challenges in this effort, and will tie those challenges into recent papers we've published in the areas of art generation (SketchRNN) and music generation (NSynth and PerformanceRNN). I hope to get feedback from attendees on the best steps to take going forward to engage these creative communities.

Douglas Eck is a Research Scientist at Google working in the areas of music, art and machine learning. He currently leads the Magenta Project, a Google Brain effort to generate music, video, images and text using deep learning and reinforcement learning. One of the primary goals of Magenta is to better understand how machine learning algorithms can learn to produce more compelling media based on feedback from artists, musicians and consumers. Doug led the Search, Recommendations and Discovery team for Play Music from the product's inception as Music Beta by Google through its launch as a subscription service. Before joining Google in 2010, Doug was an Associate Professor in Computer Science at the University of Montreal (MILA lab), where he worked on rhythm and meter perception, machine learning models of music performance, and automatic annotation of large audio data sets.

Playing music and pictures
Sageev Oore, Saint Mary's University (Canada) / Google

How do visuals, music, and engagement interact with one another? Let’s find out experientially with an uncontrolled experiment that involves live improvised music combined with some interactive audience participation. No quantitative measurements will be made, but all participants will be encouraged to reflect on their qualitative subjective experience. The only analysis will be post hoc, during the Q&A following the presentation.

Sageev Oore completed an undergraduate degree in Mathematics (Dalhousie), and MSc and PhD degrees in Computer Science (University of Toronto) working with Geoffrey Hinton. He studied piano with both classical and jazz teachers from schools including Dalhousie, Juilliard, UBC and York University (Toronto), and has performed as a soloist with orchestras both as a classical pianist and as a jazz improviser. His academic research has spanned from minimally-supervised learning for robot localization to adaptive real-time control of 3D graphical models. Together with his brother Dani, he co-created a duo instrumental CD combining classical art songs with improvisation. Recently, Sageev’s long-standing interest in combining machine learning and music surpassed his long-standing resistance to that same topic. Sageev is a professor of computer science at Saint Mary’s University (Canada), and is currently a visiting research scientist on the Magenta team (led by Douglas Eck) at Google Brain, working on applications of deep learning approaches to music-related data.

Music, engagement, and the brain
Blair Kaneshiro, Stanford University

The Music Engagement Research Initiative is a multidisciplinary research group aimed at better understanding human engagement with music through data ranging from cortical responses to user data from social media applications. This talk will focus on the group’s current neuroscience research. Here, we study engagement with real-world musical works ranging from pop to classical to minimalist, as well as with compositionally controlled stimuli. I will discuss our strategies for manipulating often complex, naturalistic stimuli to probe specific musical features thought to drive engagement; and will highlight findings that are beginning to emerge consistently across experiments. The talk will conclude with a discussion of potential extensions of the current research, including personalized experiences, audiovisual engagement, and connections to large-scale data.

Blair Kaneshiro is a Postdoctoral Scholar at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA) and Center for the Study of Language and Information (CSLI). She is currently supervising the Music Engagement Research Initiative (MERI) group at CCRMA. Her research focuses on musical engagement, analysis of neural responses to naturalistic stimuli, and development of open-source research tools and datasets. Blair completed her PhD in Computer-Based Music Theory and Acoustics (supervised by Jonathan Berger and Anthony Norcia) at CCRMA in 2016 and holds an MS in Electrical Engineering, an MA in Music, Science, and Technology, and a BA in Music, all from Stanford. From 2012 to 2016 she additionally worked as part of the R&D team at the music tech company Shazam.


Poster presentations

Engaging with Reich: Using inter-subject correlations to find patterns of engagement with early minimalism
Tysen Dauer, Blair Kaneshiro, Duc T. Nguyen, Nick Gang, and Jonathan Berger

Hybrid encoding-decoding of stimulus features and cortical responses during natural music listening
Nick Gang, Blair Kaneshiro, Jonathan Berger, and Jacek P. Dmochowski

Action monitoring in turn-taking piano duets recorded by dual-EEG
Madeline Huberth, Tysen Dauer, Irán Román, Chryssie Nanou, Wisam Reid, Nick Gang, Matthew Wright, and Takako Fujioka

Factors determining temporal reliability of ongoing EEG responses to naturalistic music
Blair Kaneshiro, Duc T. Nguyen, Jacek P. Dmochowski, Anthony M. Norcia, and Jonathan Berger

A GUI-based MATLAB tool for auditory experiment design and creation
Duc T. Nguyen and Blair Kaneshiro

Snaps and embodied engagement
Dani Oore and Sageev Oore

Hot days and cool songs: Impact of local weather on music choice
Vidya Rangasayee and Blair Kaneshiro

The Taal Autism Dance Program: A dance therapy program for autistic children
Karanvir Singh and Srishti Birla

Music genre classification by lyrics using a hierarchical attention network
Alexandros Tsaptsinos

MatClassRSA: A Matlab toolbox for EEG classification and RSA visualization
Bernard C. Wang, Anthony M. Norcia, and Blair Kaneshiro


Program organizers: Jonathan Berger, Blair Kaneshiro, Zhengshan Shi, Nette Worthey

Recent past symposia

Music and the Brain 2016: Music Information Retrieval and Data Science

Music and the Brain 2016: Resonance
(Co-sponsored by Stanford Music and the Brain and Stanford Music and Medicine)

Music and the Brain 2014: Music, Transcendence, and Spirituality

Music and the Brain 2013: Hearing Voices