CCRMA is a part of the Department of Music at Stanford University. Classes and seminars taught at the center are open to registered Stanford students and visiting scholars, and the facility is also available to them for research projects that coincide with ongoing work at the center.
Prospective graduate students especially interested in the work at CCRMA should apply to the degree program at Stanford most closely aligned with their specific field of study, e.g., Music, Computer Science, Electrical Engineering, Psychology, etc. Graduate degree programs offered in music are the MA in Music, Science, and Technology; the DMA in Composition; and the PhD in Computer-Based Music Theory and Acoustics. Acceptance in music theory or composition is largely based upon musical criteria, not knowledge of computing. Admission requirements for degree programs can be obtained directly from each particular department. CCRMA does not itself offer a degree.
The Music Department offers both an undergraduate major and minor in Music, Science, and Technology (MST). The MST specialization is designed for those students with a strong interest in the musical ramifications of rapidly evolving computer technology and digital audio and in the acoustic and psychoacoustic foundations of music. The program entails a substantial research project under faculty guidance and makes use of the highly multi-disciplinary environment at CCRMA. This program can serve as a complementary major to students in the sciences and engineering. Requirements for the undergraduate programs are available from the Stanford Music Department.
For complete information on the following classes, please see the Stanford Bulletin for the current academic year. Most courses at CCRMA also have their own websites (see http://www-ccrma.stanford.edu/CCRMA/Courses/Courses.html).
Courses offered at CCRMA include:
For sophomores only. Real-time interactive performance for interested musicians combining composition, performance, MIDI instruments, and computer programming. Introduction to programming, composition of short pieces, moving beyond familiar styles. Prepares students for work in ensembles and CCRMA courses.
Composition projects demonstrate participants' own software for voicing and controlling MIDI synthesis.
The link between "traditional" evaluation of instrumental, orchestral, and vocal music and the revolutionary world of the electronic studio occurs in works where the two are combined. The course focuses on such linking works, beginning with Stockhausen's contributions and moving on to the products of IRCAM (Boulez, Murail, etc.) and elsewhere.
Elementary physics of vibrating systems, waves, and wave motion. Time- and frequency-domain analysis of sound. Room acoustics, reverberation, and tuning systems. Acoustics of musical instruments - voice, strings, winds, and percussion. Emphasis on practical aspects of acoustics in music making. Hands-on and computer-based laboratory exercises.
Basic concepts and experiments relevant to use of sound, especially synthesized, in music. Introduction to elementary concepts; no previous background assumed. Listening to sound examples important. Emphasis on salience and importance of various auditory phenomena in music.
Survey of the development of music technology. Analysis and aesthetics of electronic music.
Topics: elementary electronics, physics of transduction and magnetic recording of sound, acoustic measurement techniques, operation and maintenance of recording equipment, recording engineering principles, microphone selection and placement, grounding and shielding techniques.
Topics: digital audio including current media, formats, editing software, post-processing techniques, noise reduction systems, advanced multi-track techniques, dynamic range processing and delay-based effects.
Independent engineering of recording sessions.
Techniques for digital sound synthesis, effects, and reverberation. Topics: summary of digital synthesis techniques (additive, subtractive, nonlinear, modulation, wavetable, granular, spectral-modeling, and physical-modeling); digital effects algorithms (phasing, flanging, chorus, pitch-shifting, and vocoding); and techniques for digital reverberation.
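To give a flavor of the delay-based effects listed above, here is a minimal flanging sketch in Python. This is illustrative only, not course material; the function name, parameter names, and default values are all assumptions. The idea is to mix the input with a copy of itself delayed by a short, slowly oscillating amount.

```python
import math

def flanger(x, sr=44100, max_delay_ms=3.0, rate_hz=0.25, depth=0.7):
    """Minimal flanging sketch: add to each input sample a copy delayed
    by a short amount swept by a low-frequency oscillator (LFO).
    Fractional delays are handled by linear interpolation."""
    max_delay = max_delay_ms * sr / 1000.0
    y = []
    for n, xn in enumerate(x):
        # LFO sweeps the delay between 0 and max_delay samples
        d = 0.5 * max_delay * (1.0 + math.sin(2.0 * math.pi * rate_hz * n / sr))
        i = n - d                      # (fractional) read position
        i0 = int(math.floor(i))
        frac = i - i0
        if 0 <= i0 and i0 + 1 < len(x):
            delayed = (1.0 - frac) * x[i0] + frac * x[i0 + 1]
        else:
            delayed = 0.0              # before the signal started
        y.append(xn + depth * delayed)
    return y
```

The characteristic "jet plane" comb-filter sweep comes from the summed direct and delayed paths cancelling at frequencies that move as the delay moves.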
Use of high-level programming as a compositional aid in creating musical structures. Studies in the physical correlates to auditory perception, and review of psychoacoustic literature. Simulation of a reverberant space and control of the position of sound within the space.
Individual projects in composition, psychoacoustics, or signal processing.
Independent research projects in composition, psychoacoustics, or signal processing.
Various topics according to interest.
Explores the diverse kinds of musical information used in sound, graphical, and analytical applications. Device-independent concepts and principles in music representation and musical research objectives (repertory analysis, performance analysis, theoretical models, similarity and stylistic simulation) will be emphasized. Examples will be drawn primarily from Western art music.
Offers an opportunity for participants to explore issues introduced in Music 253 in greater depth and to take initiative for research projects related to a theoretical or methodological issue, a software project, or a significant analytical result.
CCRMA hosts a weekly Hearing Seminar. All areas related to perception are discussed, but the group emphasizes topics that will help us understand how the auditory system works. Speakers are drawn from the group and from visitors to the Stanford area. Most attendees are graduate students, faculty, or local researchers interested in psychology, music, engineering, neurophysiology, and linguistics. To sign up for the seminar mailing list, send an e-mail request to hearing-seminar-request@ccrma.stanford.edu. Include the word subscribe in the body of that message.
Introduction to the mathematics of digital signal processing and spectrum analysis for music and audio research. Topics: complex numbers, sinusoids, spectra, aspects of audio perception, the DFT, and basic Fourier time-frequency relationships in the discrete-time case.
Topics: FFT windows; cyclic and acyclic convolution; zero padding and other spectrum analysis parameters; the overlap-add and filter-bank-summation methods for short-time Fourier analysis, modification, and resynthesis; tracking sinusoidal peaks across FFT frames; modeling time-varying spectra as sinusoids plus filtered noise; FFT-based sound synthesis; brief overviews of and introductions to transform coders, perfect-reconstruction filter banks, and wavelet transforms.
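One fact underlying the overlap-add method mentioned above is that periodic Hann windows hopped by half their length sum to a constant, so windowed frames can be modified and re-added without amplitude ripple. A small Python check (window length and hop chosen here for illustration):

```python
import math

def hann(N):
    """Periodic Hann window of length N."""
    return [0.5 - 0.5 * math.cos(2.0 * math.pi * n / N) for n in range(N)]

# With hop = N/2, overlapping periodic Hann windows sum to a constant
# (the "constant overlap-add" property behind overlap-add resynthesis).
N, hop = 8, 4
w = hann(N)
total = [0.0] * (N + 3 * hop)
for start in range(0, 3 * hop + 1, hop):   # four overlapping windows
    for n in range(N):
        total[start + n] += w[n]
# Away from the edges, every sample is covered by two half-overlapped
# windows whose values sum to exactly 1.0.
```

The same constant-overlap-add reasoning is what makes short-time Fourier modification/resynthesis an identity when no modification is applied.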
Computational models of musical instruments primarily in the wind and string families based on physical models implemented using signal processing methods. The models are designed to capture only the "audible physics" of musical instruments using computationally efficient algorithms. Topics: mass-spring systems and their discrete-time simulation, sampled traveling waves, lumping of losses and dispersion, delay-line interpolation methods, applications of allpass filters and lattice/ladder digital filters in acoustic models, models of winds and strings using delay lines, scattering junctions, digital filters, and nonlinear junctions implementing oscillation sources such as bow-string and reed-bore couplings.
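The delay-line string models described above can be illustrated with the classic Karplus-Strong algorithm, a minimal relative of the digital waveguide string: a delay line representing the sampled traveling wave, fed back through a two-point average that lumps the string's losses. This Python sketch is illustrative, not course code.

```python
import random

def pluck(freq=440.0, sr=44100, dur=0.5):
    """Karplus-Strong plucked-string sketch: the delay-line length sets
    the pitch; the 0.5*(s + next) average is a lowpass loop filter that
    makes high harmonics decay faster, as on a real string."""
    N = int(sr / freq)                                  # round-trip delay
    line = [random.uniform(-1.0, 1.0) for _ in range(N)]  # noise burst = pluck
    out = []
    for _ in range(int(sr * dur)):
        s = line.pop(0)
        line.append(0.5 * (s + line[0]))                # loss filter in the loop
        out.append(s)
    return out
```

Despite its simplicity, the recirculating noise burst settles almost immediately into a decaying quasi-periodic tone at sr/N Hz.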
The need for significant reduction in data rate for wide-band digital audio signal transmission and storage has led to the development of psychoacoustics-based data compression techniques. In this approach, the limitations of human hearing are exploited to remove inaudible components of audio signals. The degree of bit rate reduction achievable without sacrificing perceived quality using these methods greatly exceeds that possible using lossless techniques alone. Perceptual audio coders are currently used in many applications including Digital Radio and Television, Digital Sound on Film, and Multimedia/Internet Audio. In this course, the basic principles of perceptual audio coding will be reviewed. Current and future applications (e.g. AC-3, MPEG) will be presented. In-class demonstrations will allow students to hear the quality of state-of-the-art implementations at varying data rates and they will be required to program their own simple perceptual audio coder during the course.
Ongoing seminar for doctoral students pursuing research in DSP applied to music or audio.
CCRMA also offers a series of one- or two-week summer workshops open to participants outside the Stanford community. Information regarding courses to be offered during the coming summer can be accessed from the CCRMA WWW Home Page. Courses offered during the last few summers have included the following:
CCRMA has been using the Linux operating system for music composition, synthesis, and audio DSP research since 1996. This workshop will focus on currently available open-source tools and environments for computer music research and composition using Linux. The workshop will include an overview of some of the most popular Linux distributions and a brief installation clinic with specific focus on audio, MIDI, and real-time performance (dealing with both hardware and software). Low-level sound and MIDI drivers reviewed will include OSS, OSS-free, ALSA, and the now open-source MidiShare environment. Environments for sound synthesis and composition will include the Common Lisp-based CLM system, STK (C++), pd (C), and jMax (Java/C). Many other interesting tools, such as the snd sound editor (and its close ties to CLM), will also be covered. Given the very dynamic nature of the open-source community and software base, more programs will probably be included by the time the workshop starts. The workshop will also include a brief tour of sound processing and synthesis techniques. Familiarity with computers and programming languages is helpful.
This course covers analysis and synthesis of musical signals based on spectral and physical models. It is organized into morning lectures covering theoretical aspects of the models, and afternoon labs. The morning lectures present topics such as Fourier theory, spectrum analysis, the phase vocoder, digital waveguides, digital filter theory, pitch detection, linear predictive coding (LPC), and various other aspects of signal processing of interest in musical applications. The afternoon labs are hands-on sessions using SMS, the Synthesis Toolkit in C++, SynthBuilder, and other software systems and utilities. The lectures and labs are geared to a musical audience with basic experience in math and science. Most of the programs used in the workshop are available to take.
This course will introduce concepts and apply tools from cognitive psychology to the composition of virtual audio and haptic environments. In particular, the salience of various auditory and haptic phenomena to the perception and performance of music will be examined.
Just as visual artists spend time learning perspective to provoke 3D effects, composers and virtual object designers must study the perceptual sciences to create virtual environments which are convincing upon hearing and touch. We will study relevant topics from acoustics, psychology, physics and physiology. We will apply these to the design and rendering of virtual objects not for the eyes, but for the haptic and audio senses. Principles of speech, timbre, melody, pitch, texture, force, and motion perception will be addressed. Various audio and haptic effects and illusions will be demonstrated.
Morning lectures will cover these topics and also feature talks by eminent researchers and entrepreneurs working in the fields of psychoacoustics and haptics. Afternoon labs will provide practical experience in psychophysics experiment design and execution. In addition to sound synthesis tools, various haptic interfaces will be made available for experiment designs.
This introductory course will explore new approaches to interaction and improvisation between composer, performer, and computer. Topics to be discussed include performance interaction strategies (techniques of synchronization, timing, cueing, and parametric control), interactive algorithms, simulating live performance situations, tempo tracking, pitch following, and performance modeling.
Hands-on participation will use the Max programming environment and Common Music, a language that runs on Macintosh, PC, and Unix-based platforms. It will also involve real-time interaction using the Mathews-Boie Radio Baton (MIDI conductor/controller device). This course is particularly geared towards performers with an interest in interactive performance, improvisation, and other ventures into the world of music technology. Emphasis will be on group performance projects, composition of new works, and realizations of existing interactive works.
This is an introductory, fast-paced workshop in sound synthesis techniques and digital audio effects and their implementation in the CLM (Common Lisp Music) environment. We design software instruments that implement additive, subtractive, FM, sampling, wavetable, granular, spectral-modeling, and physical-modeling synthesis, as well as digital effects algorithms such as phasing, flanging, chorus, distortion, and reverberation. Introductory signal processing and perception topics will be included.
Common Lisp Music (CLM) is a public domain sound design language written on top of Common Lisp, currently running on Macintosh PowerPCs and in several UNIX environments including SGI, Sun, NeXT, and PCs running Linux. The workshop includes a Common Lisp lab that will teach basic Lisp programming skills. Familiarity with computers and programming languages is helpful, but programming proficiency is not required.
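One of the techniques covered in the workshop, FM synthesis, fits in a few lines. This Python sketch is illustrative only (it is not CLM code, and the function name and defaults are assumptions): a carrier sinusoid's phase is modulated by a second sinusoid, with the modulation index controlling the richness of the resulting spectrum.

```python
import math

def fm_tone(fc=440.0, fm=440.0, index=2.0, sr=44100, dur=0.1):
    """Simple FM synthesis sketch (Chowning-style): sin of a carrier
    phase plus index * sin of the modulator phase. With fc == fm the
    sidebands land on harmonics of the carrier."""
    return [math.sin(2.0 * math.pi * fc * n / sr
                     + index * math.sin(2.0 * math.pi * fm * n / sr))
            for n in range(int(sr * dur))]
```

Sweeping the index over time is the classic way to get brass-like spectra that brighten with loudness.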
This course introduces basic principles and techniques of algorithmic composition and covers such topics as object-oriented music representation, chance composition, musical automata, and pattern languages. Sound synthesis used in the course material will include MIDI and Common Lisp Music. The course will be taught using the Common Music environment on Mac and NeXT workstations.
The workshop will be divided into morning lectures and afternoon lab times. During the lab hours the students will gain hands-on experience working through projects and examples first presented in the morning lecture. All source code and documents from the workshop are free to take. Participation in the Introduction to Sound Synthesis workshop or familiarity with Lisp is necessary for taking this workshop. Other prior programming experience is useful but not required.
This course provides a comprehensive introduction to computer-assisted music research using the Humdrum Toolkit. Participants will learn to manipulate computer-based scores, tablatures, and other documents in order to solve a wide variety of analytic problems. By way of example, participants will learn to characterize common patterns of orchestration in Beethoven symphonies, examine harmony and voice-leading in Bach chorales, and investigate text/melody relationships in Gregorian chant.
Thousands of full scores will be available for processing online, including repertoires from various cultures, periods, and genres. The course will be of particular value to scholars contemplating graduate-level or advanced music research projects. The seminar staff will provide individual advice on participants' own research projects.
All software and documentation from the workshop (including a sizable score database) are free to take. The software is available for UNIX, DOS, OS/2, and Windows 95 (some restrictions apply). Familiarity with the 'emacs' or 'vi' text editors is recommended; limited knowledge of UNIX is helpful.
This weekend-length workshop is specifically designed for engineers or developers working with audio who are interested in deepening their background in digital audio theory. The workshop covers the use of the Fast Fourier Transform (FFT) in digital signal processing, focusing on practical spectrum analysis, sound synthesis with spectral models, and signal processing using the FFT.
CCRMA Overview ©2000 CCRMA, Stanford University. All Rights Reserved.