Dissertation Defense: Affective Analysis and Synthesis of Laughter

Fri, 12/06/2013 - 3:30pm - 5:00pm
Laughter is a universal human response to emotional stimuli. Though the production mechanism of laughter may seem crude compared to other modes of vocalization such as speech and singing, the resulting auditory signal is nonetheless expressive. That is, laughter triggered by different social and emotional contexts is characterized by distinctive auditory features that convey the state and attitude of the laughing person. By implementing prototypes for interactive laughter synthesis and conducting crowdsourced experiments on the synthesized laughter stimuli, this project investigates how acoustic features may give rise to emotional meaning.

Part I presents a new approach to interactive laughter synthesis that prioritizes expressiveness. Our synthesis model, implemented in ChucK, offers three levels of representation: the “transcription” mode requires specifying precise values for all control parameters, the “instrument” mode allows users to freely trigger and control laughter within the instrument’s capacities, and the “agent” mode semi-automatically generates laughter according to its predefined characteristic tendencies. Modified versions of this model have served as an instrument for the laptop orchestra, as well as a stimulus generator for perception experiments.

Part II describes a series of experiments conducted to understand (1) how acoustic features affect listeners’ perception of emotions in synthesized laughter, and (2) to what extent the observed relationships between features and emotions are laughter-specific. To explore the first question, selected features are varied systematically to measure their impact on the perceived intensity and valence of emotions. To explore the second question, we intentionally eliminate the timbral and pitch-contour cues that are essential to our recognition of laughter, in order to gauge the extent to which these acoustic features are specific to the domain of laughter.

By focusing on the affective dimensions of laughter, this work complements prior work on laughter synthesis, which has primarily emphasized acceptability criteria. Moreover, by collecting listeners’ responses to synthesized laughter stimuli, this work attempts to establish a causal link between acoustic features and emotional meaning that is difficult to achieve using real laughter sounds.

Jieun Oh is a PhD candidate at CCRMA, working with Prof. Ge Wang. Her research interests include new music-making paradigms (laptop and mobile phone ensembles), crowdsourcing for conducting experiments in auditory perception, and, more broadly, how music and technology change the way people interact.

Open to the Public