Retrieving musical information from neural data: how cognitive features enrich acoustic ones
Various features, from low-level acoustic properties to higher-level statistical regularities and memory associations, contribute to the experience of musical enjoyment and pleasure. Recent work suggests that musical surprisal, that is, the unexpectedness of a musical event given its context, may directly predict listeners' experiences of pleasure and enjoyment during music listening. Understanding how surprisal shapes listeners' preferences for certain musical pieces has implications for music recommender systems, which are typically content-based (whether acoustic or semantic) or metadata-based. Here we test a recently developed computational algorithm, the Dynamic-Regularity Extraction (D-REX) model, which uses Bayesian inference to predict the surprisal that humans experience while listening to music. We demonstrate that the brain tracks musical surprisal as modeled by D-REX by conducting a decoding analysis on the neural signal (collected through magnetoencephalography) of participants listening to music. We thus demonstrate the validity of a computational model of musical surprisal, which may inform the next generation of recommender systems. In addition, we present an open-source neural dataset that will be available for future research, fostering approaches that combine MIR with cognitive neuroscience, which we believe will be a key strategy in characterizing people's reactions to music.