The course begins with a review of Fourier theory, including a discussion of important Fourier-transform properties. The Short-Time Fourier Transform (STFT) of time-varying audio signals is presented next. Various analysis windows are discussed, along with their application in audio spectrum analysis and filter design. Both interpretations of the STFT, as an overlap-add procedure or as a filterbank summation, will be addressed. Applications of the STFT to time/frequency analysis and spectral modification, spectrum-analysis trade-offs, and time-varying and nonlinear modifications will be studied.
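The overlap-add interpretation mentioned above can be sketched in a few lines of Python (a hypothetical illustration, not part of the course materials): a periodic Hann window at a hop of half the frame length satisfies the constant-overlap-add condition, so overlap-adding the inverse transforms reconstructs the signal exactly away from the edges.

```python
import numpy as np

def stft(x, n=512, hop=256):
    """Hann-windowed short-time Fourier transform (one frame per row)."""
    w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(n) / n)  # periodic Hann
    starts = range(0, len(x) - n + 1, hop)
    return np.array([np.fft.rfft(w * x[s:s + n]) for s in starts])

def istft(frames, n=512, hop=256):
    """Resynthesize by overlap-adding the inverse FFT of each frame."""
    y = np.zeros((len(frames) - 1) * hop + n)
    for i, spectrum in enumerate(frames):
        y[i * hop:i * hop + n] += np.fft.irfft(spectrum, n)
    return y
```

With hop = n/2 the shifted Hann windows sum to one, so `istft(stft(x))` returns the interior of `x` unchanged; spectral modifications are applied to the frames between analysis and resynthesis.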
The next section of the class is concerned with sinusoidal modeling and additive synthesis of audio signals. Specific topics include analysis and synthesis techniques such as the channel vocoder, the phase vocoder, and Spectral Modeling Synthesis (SMS). We will focus on time-compression and time-expansion applications as well as band-limited interpolation. Depending on the interests of the participants, additional topics may include wavelets and MPEG data compression.
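As a toy illustration of additive synthesis (a minimal sketch under simplifying assumptions, not drawn from the course software), a tone can be built by summing sinusoidal partials specified as frequency/amplitude pairs; SMS-style systems extend this idea by tracking time-varying partial trajectories obtained from STFT analysis.

```python
import numpy as np

def additive_synth(partials, dur=1.0, sr=44100):
    """Sum fixed sinusoidal partials given as (frequency_hz, amplitude)
    pairs.  Real sinusoidal-modeling systems instead interpolate
    time-varying frequencies and amplitudes frame by frame."""
    t = np.arange(int(dur * sr)) / sr
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)

# A crude sawtooth-like tone from the first five harmonics of 220 Hz:
tone = additive_synth([(220.0 * k, 1.0 / k) for k in range(1, 6)], dur=0.5)
```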
This course provides a comprehensive introduction to computer-assisted music research using the Humdrum Toolkit. Participants will learn to manipulate computer-based scores, tablatures, and other documents in order to solve a wide variety of analytic problems. By way of example, participants will learn to characterize common patterns of orchestration in Beethoven symphonies, examine harmony and voice-leading in Bach chorales, and investigate text/melody relationships in Gregorian chant.
Thousands of full scores, spanning repertoires from various cultures, periods, and genres, will be available for on-line processing. The course will be of particular value to scholars contemplating graduate-level or advanced music research projects. The seminar staff will provide individual advice on participants' own research projects.
All software and documentation from the workshop (including a sizeable score database) are free to take. The software is available for UNIX, DOS, OS/2, and Windows 95 (some restrictions apply). Familiarity with the `emacs' or `vi' text editors is recommended; limited knowledge of UNIX is helpful.
This course will introduce concepts and apply tools from cognitive psychology to the composition of virtual audio and haptic environments. In particular, the salience of various auditory and haptic phenomena to the perception and performance of music will be examined.
Just as visual artists spend time learning perspective to produce convincing 3D effects, composers and virtual-object designers must study the perceptual sciences to create virtual environments that are convincing to the ear and to the touch. We will study relevant topics from acoustics, psychology, physics, and physiology, and apply them to the design and rendering of virtual objects not for the eyes, but for the haptic and auditory senses. Principles of speech, timbre, melody, pitch, texture, force, and motion perception will be addressed. Various audio and haptic effects and illusions will be demonstrated.
Morning lectures will cover these topics and also feature talks by eminent researchers and entrepreneurs working in the fields of psychoacoustics and haptics. Afternoon labs will provide practical experience in psychophysics experiment design and execution. In addition to sound synthesis tools, various haptic interfaces will be made available for experiment designs.
Hands-on participation will use the Max programming environment and Common Music, a language that runs on Macintosh, PC, and Unix-based platforms. It will also involve real-time interaction using the Mathews-Boie Radio Baton (a MIDI conductor/controller device). This course is particularly geared toward performers with an interest in interactive performance, improvisation, and other ventures into the world of music technology. Emphasis will be on group performance projects, composition of new works, and realizations of existing interactive works.
More information on the [Radio Baton]
Common Lisp Music (CLM)* is a public-domain sound design language written on top of Common Lisp, currently running on Macintosh PowerPCs and in several UNIX environments, including SGI, Sun, NeXT, and PCs running Linux. The workshop includes a Common Lisp lab that will teach basic Lisp programming skills. Familiarity with computers and programming languages is helpful, but programming proficiency is not required.
More information on [CLM]
The workshop will be divided into morning lectures and afternoon lab sessions. During the lab hours, students will gain hands-on experience working through projects and examples first presented in the morning lecture. All source code and documents from the workshop are free to take. Participation in the Introduction to Sound Synthesis workshop, or familiarity with Lisp, is necessary for taking this workshop. Other prior programming experience is useful but not required.
[Students may take the full four-week, three-part Algorithmic Composition course at a reduced tuition rate of $1400. The combination of any two parts of the course will not be discounted.]
The afternoon labs will be hands-on sessions using SMS and the Synthesis Toolkit in C++, and other software systems and utilities. Familiarity with engineering, mathematics, physics, and programming is a plus, but the lectures and labs will be geared to a musical audience with basic experience in math and science. Most of the programs used in the workshop will be available to take.
FOR APPLICATIONS CONTACT:
©1996 CCRMA, Stanford University. All Rights Reserved. Created and maintained by Alex Igoudin, aledin@ccrma.stanford.edu