Music and the Brain Symposium 2018
Now in its second decade, the annual Music and Brain Symposium brings together scholars, scientists, and artists across multiple disciplines to explore music as a human behavior. This year's symposium will broadly address aspects of musical performance and expressivity.
"Fountain" — Michiko Theurer
Sponsored by the Scott and Annette Turow Fund
The schedule will be posted soon.
Abstracts and Speaker Bios
The Runaway Species
David Eagleman — Stanford University
Anthony Brandt — Rice University
The human drive to create makes us unique among living things. What is special about the human brain that enables us to innovate? Drawing on their book “The Runaway Species: How Human Creativity Remakes the World,” neuroscientist David Eagleman and composer Anthony Brandt examine the evolutionary tweaks that gave rise to our species’ imaginative gifts—gifts that we all share. Weaving together arts and science, Drs. Eagleman and Brandt explore the cognitive software that generates new ideas, and key facets of a creative mentality. They also examine how creativity is constrained by its time and place, and discuss whether there are aesthetic universals that transcend culture.
David Eagleman is a neuroscientist, an internationally bestselling author, a Guggenheim Fellow, and an adjunct professor at Stanford University. He is the writer and presenter of The Brain, an Emmy-nominated television series on PBS and BBC. Dr. Eagleman’s areas of research include sensory substitution, time perception, vision, and synesthesia; he also studies the intersection of neuroscience with the legal system, and in that capacity he directs the Center for Science and Law. Eagleman is the author of many books, including The Runaway Species, The Brain, Incognito, and Wednesday Is Indigo Blue. He is also the author of a widely adopted textbook on cognitive neuroscience, Brain and Behavior, as well as a bestselling book of literary fiction, Sum, which has been translated into 32 languages, turned into two operas, and named a Best Book of the Year by Barnes and Noble. Dr. Eagleman writes for the Atlantic, New York Times, Discover Magazine, Slate, Wired, and New Scientist, and appears regularly on National Public Radio and BBC to discuss both science and literature. He has been a TED speaker and a guest on the Colbert Report, and has been profiled in the New Yorker. He has spun several companies out of his lab, including NeoSensory, a company that uses haptics for sensory substitution and addition.
Anthony Brandt is a Professor of Composition at Rice University’s Shepherd School of Music and Artistic Director of Musiqa, a two-time winner of Adventurous Programming Awards from Chamber Music America and ASCAP. His compositional output includes two chamber operas, as well as works for orchestra, chamber ensemble, voice, choir, theater, dance, and art installations. Recordings of his music are available on the Albany and Crystal labels. His honors include a Koussevitzky Commission from the Library of Congress, grants from the National Endowment for the Arts, Meet-the-Composer, and the Houston Arts Alliance, and fellowships to the MacDowell and Djerassi arts colonies. He has been a visiting composer at the Bremen Musikfest, the Universidad Veracruzana, the Bowdoin International Music Festival, the Baltimore New Chamber Festival, Cleveland State University, and SUNY-Buffalo, and composer-in-residence of the International Festival of Music in Morelia, Mexico and Houston’s OrchestraX. Dr. Brandt and neuroscientist David Eagleman have recently co-authored The Runaway Species: How Human Creativity Remakes the World (“Essential and highly pleasurable reading” – Kirkus; “Beautifully produced, illustrated and written” – Nature). Dr. Brandt has also co-authored several papers on music cognition, as well as an upcoming chapter for the Oxford Handbook of Music Psychology. He has organized three international conferences on “Exploring the Mind through Music” at Rice. Dr. Brandt has been awarded Rice University’s Phi Beta Kappa and George R. Brown teaching prizes.
Music performance as problem solving
Elaine Chew — Queen Mary University of London
Music performance, with its on-the-fly decision-making, is considered to be one of the most breathtaking feats of human intelligence. The nature of the creativity in music performance, the reasoning behind the interpretations of a piece of music, and the work performers do to create novel and moving experiences, however, largely remain a mystery. This is partly because there are few established ways of representing, conceptualizing, and talking about the work of performance. Re-framing performance as an act of problem solving, I shall show different ways to represent and conceptualize alternate solutions (interpretations) created in performance; these representations provide concrete measures of the extent to which a performer deforms musical space and time. I will also show that these techniques apply to electrocardiographic recordings of abnormal heartbeats. Finally, I will propose a theoretical framework for reverse-engineering the thinking behind a performance.
Elaine Chew is Professor of Digital Media at Queen Mary University of London, where she is affiliated with the Centre for Digital Music. Her research centers on the mathematical modeling and computational analysis of music structures, particularly as they are shaped in expressive performance; more recently, she has been applying these techniques to cardiac arrhythmia signal analysis. Her work has been recognized by NSF CAREER/PECASE awards, fellowships at the Radcliffe Institute for Advanced Study at Harvard, and a new European Research Council Advanced Grant for the project COSMOS (Computational Shaping and Modeling of Musical Structures). Professor Chew is author of numerous research articles and a Springer monograph on Mathematical and Computational Modeling of Tonality. She is also a pianist, performing widely in chamber and solo concerts where classical and post-tonal eclectic music performance is intertwined with scientific discourse. She received PhD and SM degrees in Operations Research from MIT, a BAS in Mathematical and Computational Sciences (honors) and Music Performance (distinction) from Stanford, and Fellowship and Licentiate diplomas in piano performance from Trinity College, London.
Psychological mechanisms of engaged listening and spontaneous motor interaction in music
Petr Janata — University of California, Davis
Some of the strongest emotional experiences that people have with music, such as the feeling of being in "the groove," arise in a context of engaged interactions of individuals among each other and with the music that they hear and/or create. These interactions are considered in the context of the brain's perception/action cycle and attendant psychological mechanisms such as attention and emotion. Operating in non-musicians and musicians alike, these basic mechanisms help to define music as a substrate for various sensorimotor (performative) interactions with patterned auditory environments, one that is accessible to the vast majority of people.
Petr Janata is a Professor in the Psychology Department and Center for Mind and Brain at UC Davis. He received his B.A. from Reed College and his Ph.D. from the University of Oregon. His research on how the human brain engages with music has examined expectation, imagery, sensorimotor coupling, memory, and emotion in relation to tonality, rhythm, and timbre. His work also emphasizes the use of models of musical structure to analyze behavioral and brain data. He is particularly interested in musical situations that elicit strong emotional experiences, such as music-evoked remembering or being in the groove. He is the recipient of two Fulbright research fellowships, a Guggenheim Fellowship, a Templeton Advanced Research Program award from the Metanexus Institute, and grant funding from the NSF, NIH, and the GRAMMY Foundation.
Lasting Impressions in the Proactive Brain
Moshe Bar — Bar-Ilan University
It is proposed that the human brain is proactive in that it continuously generates predictions that approximate the relevant future. This proposal posits that coarse information is extracted rapidly from the input to derive analogies linking that input with representations in memory. The linked stored representations then activate the associations that are relevant in the specific context, which provides focused predictions. These predictions facilitate perception and cognition by pre-sensitizing relevant representations. In the talk I will concentrate on top-down predictions particularly in visual recognition and in the application of contextual knowledge in the human brain. This cognitive neuroscience framework provides a new hypothesis (The Lasting Primacy Hypothesis) with which to consider the purpose of memory, and can help explain a variety of phenomena, ranging from recognition to first impressions, from preferences to aesthetic evaluations, and from the brain’s ‘default mode’ to a host of mental disorders.
Professor Bar is Director of the Gonda Multidisciplinary Brain Research Center at Bar-Ilan University and head of the Cognitive Neuroscience Lab. Before moving to Israel, Professor Bar was an Associate Professor in Neuroscience, Psychiatry and Radiology at Harvard Medical School and the Massachusetts General Hospital. His work focuses on exploring how the brain extracts and uses contextual information to generate predictions and guide cognition efficiently, as well as on characterizing the links between cognitive processing, mood, and depression, with the ultimate goal of developing focused behavioral methods for the treatment of mood disorders. Prof. Bar uses neuroimaging (fMRI, MEG), psychophysical, and computational methods in his research.
Decoding Mental States in Ensemble Music Performance
Caroline Palmer — McGill University
Fine temporal coordination is a prerequisite for successful ensemble performance. Theories of coordination in groups have focused on notions of entrainment, reflected in behavioral measures (acoustics) of music performance and in neural indices (electroencephalogram, EEG). Despite its importance, little is known about the real-time dynamics of the mechanisms that support temporal coordination among performing musicians. I will describe a nonlinear dynamics framework of coordination in musical ensembles, and experiments with duet performers that compare behavioral measures of synchronization (tone onsets) with measures of inter-brain coupling (EEG). The framework makes predictions for which partners (not individuals) form the best coordinative systems and how musical roles (leadership) influence coordination in groups.
Caroline Palmer is a Professor in the Department of Psychology at McGill University. She is internationally recognized for her interdisciplinary research in auditory cognition; she holds the Canada Research Chair in Cognitive Neuroscience of Performance and is Director of the NSERC-Create training network in Complex Dynamics of Brain and Behaviour. Her pioneering work uncovered temporal relationships among interpretation, expression, and meaning in music performance. Those findings have altered our understanding of how complex acoustics communicate information among musicians, speakers, and listeners. As founder of two national training networks, Palmer has translated laboratory science into industrial and health-care workplace experience across North America.
Music Composition and Performance via Neural Decoding: Reconstructing Audio Features from High Resolution (7T) fMRI
Michael Casey — Dartmouth College
The deep neural codes underlying the perception and imagination of everyday sound and vision are key to advances in brain-computer interfaces. What if a brain-computer interface could capture the image of our mind's eye, and the sound of our mind's ear, and render them for others to see and hear? Whilst this type of mind reading sounds like science fiction, recent work by computer scientists and neuroscientists (Nishimoto et al., 2011; Haxby et al., 2014) has shown that visual features corresponding to subjects' perception of images and movies can be predicted from brain imaging data alone (fMRI). In these studies, brain images are expressed as high-dimensional feature vectors, i.e., points in a neural representational space whose dimensions correspond to voxel locations.
Toward such neural decoding for sound and music, we present our research on learning stimulus-model based decoding of music, for both perception and imagination of the stimuli (Casey et al., 2012; Hanke et al., 2015). We use between-subject hyper-alignment (Xu et al., 2012) of neural representational spaces so that models trained on one group of subjects can decode neural data from previously unseen subjects. Somewhat remarkably, hyper-aligned models significantly outperform both within-subject models and models that are aligned by anatomical features only. To encourage further development of such neural decoding methods, the code, stimuli, and high-resolution 7T fMRI data from one of our experiments have been publicly released via the OpenfMRI initiative.
Michael Casey is the James Wright Professor of Music, Professor of Computer Science, and Chair of the Music Department at Dartmouth College. He received his Ph.D. from the MIT Media Laboratory's Machine Listening group in 1998, whereupon he became a Research Scientist at Mitsubishi Electric Research Laboratories (MERL) and later Professor of Computer Science at Goldsmiths, University of London, before joining Dartmouth in 2008. His current research combines machine learning methods for audio-visual data with neuroimaging methods for brain-computer applications. His research is funded by the National Science Foundation (NSF), the Mellon Foundation, and the Neukom Institute for Computational Science, with prior research awards from Google Inc., the Engineering and Physical Sciences Research Council (EPSRC, UK), and the National Endowment for the Humanities (NEH). Michael and his family live in Hanover, NH.
Information for poster presenters
Students from CCRMA and related departments are invited to present posters on research broadly related to the theme of the symposium.
Exact time of poster session to be announced soon.
Poster dimensions: Not to exceed 44"H x 44"W
Presenters are welcome to use a layout of their choice or one of the following templates (.pptx):
LaTeX users may wish to use Jorge Herrera's LaTeX poster template
Questions about posters? Email Noah Fram, nfram ~at~ ccrma \./ stanford \./ edu
Recent past symposia
Music and the Brain 2016: Resonance
(Co-sponsored by Stanford Music and the Brain and Stanford Music and Medicine)