This research is concerned with timbre analysis that benefits from the innate human ability to identify and process phonemes. It is based on a consistent transliteration of musical timbre into speech, resulting in a context-independent descriptor of timbre.
There is a significant amount of recent research in cognitive neuroscience dealing with language and music intersections [Deutsch (1991, 2004), Lewman (1992), Aiello (1994), Besson & Schön (2001), Cross (2001), Lerdahl (2001), Marler (2001), Molino (2001), Patel et al. (1997, 1998, 2003), Levitin (2003), Peretz (2003), Koelsch (2004), McMullen (2004)]. The kinds of musical material generally used are synthetic tone sequences or a limited repertoire belonging to the classic-romantic Western common practice or popular music. We are interested in the consequences of a more extensive use of contemporary music in this type of research. To this end, we have compiled a number of pieces created in the 20th century that offer an opportunity to review widespread assumptions about the categorizations implied by the words "music" and "language". Our goal is to formalize in a systematic and intuitive manner the space in which music and language intersections occur. We developed a bi-dimensional representation of this space, allowing for a comparative analysis of basic aspects concerning music and speech in selected pieces. This work reflects our perspective as composers and our belief that the utilization of this repertoire can broaden the scope of the questions posed in the field of cognitive neuroscience.
We are interested in establishing a method for the transliteration of a music score into phonemes (phonetization), as well as the transcription of human speech into a music score. Common music notation provides symbols representing pitches in time for different instruments. Our method finds the most appropriate IPA phoneme to describe a frame of musical data, and conversely translates a frame of recorded speech into common Western music notation whose performance by acoustic instruments preserves the acoustic signature and is thus perceived as human speech. This method is based on comparing spectral information from a single source to a database to yield a best-match solution. Its strength depends on the size of the instrument and phoneme databases used for comparison.
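The frame-by-frame best-match step described above can be sketched as a nearest-neighbor search over spectral templates. This is only an illustrative sketch: the coarse 4-band envelopes, the cosine-similarity metric, and the toy phoneme database below are assumptions for demonstration, not the spectral features or database actually used in the abstract's method.

```python
# Sketch: match one frame of spectral data to the nearest phoneme template.
# Templates and metric are hypothetical illustrations.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length spectral vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_phoneme_match(frame_spectrum, phoneme_db):
    """Return the IPA symbol whose template spectrum is most
    similar to the given frame (highest cosine similarity)."""
    return max(phoneme_db,
               key=lambda p: cosine_similarity(frame_spectrum, phoneme_db[p]))

# Toy database: coarse 4-band spectral envelopes (invented values).
phoneme_db = {
    "a": [0.9, 0.7, 0.2, 0.1],   # open vowel: energy in low bands
    "i": [0.8, 0.1, 0.6, 0.3],   # close front vowel: raised upper band
    "s": [0.1, 0.2, 0.5, 0.9],   # fricative: high-frequency energy
}

frame = [0.85, 0.65, 0.25, 0.15]
print(best_phoneme_match(frame, phoneme_db))  # prints "a"
```

The same lookup run in the opposite direction, against a database of instrument spectra instead of phoneme templates, would give the speech-to-notation half of the method; the abstract's point that accuracy grows with database size follows directly from this nearest-neighbor formulation.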
Philosophers, linguists, and musicians have documented the relationship between language and music throughout history. The fields have intersected in the study of speech intonation (the "melody" of an utterance, including such characteristics as pitch, stress, accent, and phrasing); linguists have used musical analogies to explain intonation patterns, and musicians have used linguistic analogies to explain melodic patterns.
This study traces the history of the interaction between speech intonation and musical melody, and determines whether a relationship exists between the two in 19th-century French and German art songs. I compare the contours of the spoken languages with the generalized contours of music written to texts in those languages by composers who were native speakers of the language. The musical information is managed using the Humdrum Toolkit, a database system for musical research developed by David Huron.
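One simple way to compare a speech-intonation contour with a melodic contour is to reduce both to coarse up/down/repeat sequences (in the style of Parsons code) and measure how often they agree. The F0 values, MIDI pitches, and agreement metric below are illustrative assumptions, not the study's actual Humdrum-based procedure.

```python
# Sketch: contour comparison between speech F0 and a melody,
# using hypothetical data and a Parsons-code-style reduction.

def contour(pitches):
    """Reduce a pitch sequence (Hz or MIDI) to a string of
    'u' (up), 'd' (down), 'r' (repeat) moves."""
    moves = []
    for prev, curr in zip(pitches, pitches[1:]):
        moves.append("u" if curr > prev else "d" if curr < prev else "r")
    return "".join(moves)

def agreement(c1, c2):
    """Fraction of positions where two equal-length contours agree."""
    if len(c1) != len(c2):
        raise ValueError("contours must have equal length")
    return sum(a == b for a, b in zip(c1, c2)) / len(c1)

# Hypothetical data: F0 (Hz) of a spoken phrase vs. MIDI pitches of
# a melody set to the same text.
speech_f0 = [180, 210, 205, 190, 230, 170]
melody    = [62,  67,  65,  64,  69,  60]

print(contour(speech_f0))                             # "uddud"
print(agreement(contour(speech_f0), contour(melody)))  # 1.0 (identical contours)
```

In practice the study's contours would be extracted from Humdrum-encoded scores and phonetic transcriptions rather than hand-entered lists, but the comparison reduces to this kind of symbolic matching.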
© Copyright 2005 CCRMA, Stanford University. All rights reserved.