Kitty Shi's Dissertation Defense: Computational analysis and modeling of expressive timing in music performance

Date: Tue, 04/20/2021, 10:00am - 11:00am
Location: Zoom
Event Type: Guest Lecture
This thesis presents a machine learning model of expressive performance of piano music (specifically of Chopin Mazurkas) and a critical analysis of its output, based on statistical analyses of the musical scores and of recorded performances. Given the multidimensionality of the task, generating compelling computer-generated interpretations of a musical score is a formidable challenge and a significant goal of music information retrieval (MIR) and computer music research. Here I seek to characterize the problems and suggest solutions.

Performers' distortion of notated rhythms in a musical score is a significant factor in producing convincingly expressive interpretations. Sometimes exaggerated and sometimes subtle, these distortions are driven by a variety of factors, including schematic features (both structural, such as phrase boundaries, and surface events, such as recurrent rhythmic patterns) as well as relatively rare veridical events that characterize the individuality of a particular piece. Performers tend to adopt similar, pervasive approaches to interpreting schemas, resulting in common performance practices, while often formulating less common approaches to the interpretation of veridical events. Furthermore, some performers choose anomalous interpretations of schemas.
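
These beat-level distortions can be measured directly by comparing performed note onsets against the mechanically even onsets the notation implies. The following is a minimal Python sketch of that measurement, not the model described in the thesis; the onset values and the nominal 140 BPM tempo are invented for illustration.

    import numpy as np

    def local_tempo(onsets_sec):
        """Local tempo in BPM from successive beat-level onset times (seconds)."""
        iois = np.diff(onsets_sec)  # inter-onset intervals
        return 60.0 / iois

    # Illustrative data: two 3/4 mazurka bars at a nominal 140 BPM,
    # with the characteristic lengthening of the second beat exaggerated.
    beat = 60.0 / 140.0
    score_onsets = np.arange(6) * beat                           # mechanical onsets
    perf_onsets = np.array([0.00, 0.41, 0.95, 1.31, 1.73, 2.29])  # performed onsets

    bpm = local_tempo(perf_onsets)
    deviation = np.diff(perf_onsets) - np.diff(score_onsets)  # seconds; + = slower
    print("local tempo (BPM):", np.round(bpm, 1))
    print("timing deviation (s):", np.round(deviation, 3))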

This thesis presents statistical analyses of the timings of recorded human performances of selected Mazurkas by Frédéric Chopin. These include a dataset of 456 expressive piano performances derived from historical piano rolls that I automatically translated to MIDI format, as well as timing data of acoustic recordings from an available collection. I compared these analyses to performances of the same works generated by a neural network trained on recorded human performances of the entire corpus. The thesis demonstrates that while machine learning succeeds, to some degree, in the expressive interpretation of schemas, convincingly capturing performance characteristics remains very much a work in progress.
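
As a rough sketch of how such timing data might be pulled from the MIDI translations, the snippet below reads note onsets with the pretty_midi library and compares the inter-onset-interval profiles of a human and a model rendition. The file names are hypothetical, and the naive truncation alignment stands in for the more careful score alignment a real analysis would require.

    import numpy as np
    import pretty_midi

    def onset_times(midi_path):
        """Sorted note-onset times (seconds) from a performance MIDI file."""
        pm = pretty_midi.PrettyMIDI(midi_path)
        onsets = [note.start
                  for inst in pm.instruments if not inst.is_drum
                  for note in inst.notes]
        return np.array(sorted(onsets))

    def timing_similarity(human_path, model_path):
        """Correlation of inter-onset-interval sequences, crudely aligned by truncation."""
        h = np.diff(onset_times(human_path))
        m = np.diff(onset_times(model_path))
        n = min(len(h), len(m))
        return np.corrcoef(h[:n], m[:n])[0, 1]

    # Hypothetical file names for one Mazurka rendered by a human and by the model:
    # print(timing_similarity("mazurka_human.mid", "mazurka_model.mid"))
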
Open to the Public