CCRMA also offers a series of one- or two-week summer workshops open to participants outside the Stanford community. Information regarding courses to be offered during the coming summer can be accessed at http://ccrma.stanford.edu/workshops/. Courses offered during the last few summers have included the following:
CCRMA has been using the Linux operating system for music composition, synthesis, and audio DSP research since 1996. This workshop will focus on currently available open-source tools and environments for computer music research and composition under Linux. The workshop will include an overview of some of the most popular Linux distributions and a brief installation clinic with specific focus on audio, MIDI, and real-time performance (covering both hardware and software). Low-level sound and MIDI drivers reviewed will include OSS, OSS/Free, and ALSA. Environments for sound synthesis and composition will include the Common Lisp-based CLM system, STK (C++), and pd (C). Many other useful tools, such as the Snd sound editor (and its internal Scheme programming environment), will also be covered. Given the very dynamic nature of the open-source community and software base, more programs will probably have been added by the time the workshop starts. The workshop will also include a brief tour of sound processing and synthesis techniques. Familiarity with computers and programming languages is helpful.
This course will cover analysis and synthesis of sounds based on spectral and physical models. Models and methods for synthesizing real-world sounds as well as musical sounds will be presented. The course will be organized into morning lectures covering theoretical aspects of the models, and afternoon labs. The morning lectures will present topics such as Fourier theory, spectrum analysis, the phase vocoder, digital waveguides, digital filter theory, pitch detection, linear predictive coding (LPC), high-level feature extraction, and various other aspects of signal processing of interest in sound applications.
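As a small taste of the spectrum-analysis material, here is a minimal sketch (in Python with NumPy, not the course's own SMS or STK tools) of windowed FFT-based peak-frequency estimation; the function name and parameters are illustrative only.

```python
import numpy as np

def peak_frequency(signal, sample_rate):
    """Estimate the dominant frequency of a block via a windowed FFT."""
    window = np.hanning(len(signal))           # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(signal * window))
    bin_index = int(np.argmax(spectrum))       # strongest magnitude bin
    return bin_index * sample_rate / len(signal)

# A 440 Hz test tone sampled at 44.1 kHz
sr = 44100
t = np.arange(4096) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
estimate = peak_frequency(tone, sr)            # accurate to within one FFT bin
```

Analysis/synthesis systems such as the phase vocoder refine this bin-resolution estimate using phase information across overlapping frames, which is part of what the lectures cover.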
The afternoon labs will be hands-on sessions using SMS and the Synthesis ToolKit in C++, and other software systems and utilities. Familiarity with engineering, mathematics, physics, and programming is a plus, but the lectures and labs will be geared to a musical audience with basic experience in math and science. Most of the programs used in the workshop will be available to take home.
Given the short duration of the workshop and the broad spectrum of topics to cover, the lectures will necessarily be fairly high-level in nature. However, a full complement of in-depth readings will be provided for those who wish to investigate the details of the material. Also, the last two days of the workshop will include a more detailed treatment of some advanced topics, and the corresponding afternoon labs will give students a chance to work on specific problems of interest to them.
This workshop integrates programming, electronics, interaction design, audio, and interactive music. The focus will be on hands-on applications using sensors and microprocessors in conjunction with real-time DSP to make music. Specific technologies will include C programming for Atmel AVR microcontrollers, pd and/or Max/MSP for music synthesis, and sensors including force-sensitive resistors, bend sensors, accelerometers, IR range finders, etc. Participants will design and build working prototypes using a kit that can be taken home at the end of the workshop. Further issues to be explored include modes and mappings in computer music, exercises in invention, and applications of sensors and electronics to real-time music. The course will be augmented by a survey of existing controllers and pieces of interactive music.
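To illustrate the "mappings" theme, here is a sketch (in Python for brevity, though the workshop itself uses C on AVR microcontrollers with pd or Max/MSP for synthesis) of mapping a raw 10-bit ADC sensor reading to a normalized synthesis parameter; the function and its `curve` parameter are hypothetical, not from the course kit.

```python
def map_sensor(raw, in_lo=0, in_hi=1023, out_lo=0.0, out_hi=1.0, curve=2.0):
    """Map a raw 10-bit ADC reading to a normalized control value.

    A curve > 1 gives an exponential-feeling response, which often suits
    musical parameters such as amplitude better than a straight linear map.
    """
    raw = min(max(raw, in_lo), in_hi)              # clamp out-of-range readings
    normalized = (raw - in_lo) / (in_hi - in_lo)   # 0.0 .. 1.0
    return out_lo + (out_hi - out_lo) * normalized ** curve
```

Choosing the mapping (linear, exponential, segmented) is itself a musical decision, which is why the workshop treats mappings as a design topic rather than a detail.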
This workshop is intended for:
The workshop will consist of half-day supervised lab sessions, and half-day lectures, classroom exercises and discussions. Classroom sessions will feature live demos and/or concerts of interactive music and instruments. Participants are encouraged (but by no means required) to bring their own laptop computers with any music software/hardware they already use.
This workshop will explore the design of haptic musical interface systems, which provide force feedback to the performer in addition to producing synthesized sound. Each day will consist of a morning lecture, during which the simulation of tactile and kinesthetic cues will be covered, and haptic musical interfaces designed by other researchers will be studied and presented to the class. An afternoon lab will follow, in which participants will build their own interface and experiment with programming haptic cues appropriate to various musical instrument performance techniques and the corresponding sound synthesis.
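The kinesthetic side of such an interface typically comes down to a control loop that reads position and velocity each servo tick and outputs a force. A minimal virtual spring-damper cue, sketched in Python with made-up parameter values (real devices run this at kilohertz rates with carefully tuned gains):

```python
def haptic_force(position, velocity, rest=0.0, stiffness=200.0, damping=0.5):
    """Virtual spring-damper cue: F = -k * (x - x0) - b * v.

    Stiffness (N/m) and damping (N*s/m) values here are illustrative only.
    """
    return -stiffness * (position - rest) - damping * velocity

# Pressing a virtual 'string' 1 cm past rest pushes back with 2 N
force = haptic_force(position=0.01, velocity=0.0)
```

Varying the stiffness, damping, or rest position over time is one simple way to give different "instruments" distinct feels under the same hardware.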
Perceptual audio coders are currently used in many applications, including digital radio and television, digital sound on film, multimedia/Internet audio, portable devices, and electronic music distribution (EMD). This workshop integrates digital signal processing, psychoacoustics, and programming to provide the basis for building a simple perceptual audio coding system. The first part of the workshop addresses the basic principles of perceptual audio coding. In the second part, design choices applied in state-of-the-art audio coding schemes (e.g., AC-3; MPEG Layers I, II, and III (MP3); MPEG AAC; MPEG-4) are presented. In-class demonstrations will allow students to hear the quality of state-of-the-art implementations at varying data rates, and students will be required to program their own simple perceptual audio coder during the workshop. This workshop is intended for:
The workshop will consist of half-day lectures, half-day supervised lab sessions, and classroom exercises and discussions. In addition to addressing basic theory and implementations, classroom sessions will feature state-of-the-art audio coding demos. Participants are encouraged (but by no means required) to bring their own laptop computers. Knowledge of basic digital audio principles and C programming is expected.
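As a greatly simplified illustration of what such a coder does, the Python sketch below discards spectral bins far below the peak (a crude stand-in for a psychoacoustic masking threshold) and coarsely quantizes the rest. Real coders use filter banks such as the MDCT and genuine masking models rather than a fixed threshold; the function and its parameters are invented for illustration.

```python
import numpy as np

def crude_band_coder(samples, threshold_db=-40.0, bits=8):
    """Toy 'perceptual' coder: FFT a block, zero bins far below the peak
    (those bins cost no bits to transmit), coarsely quantize the rest,
    and reconstruct the time-domain block."""
    spectrum = np.fft.rfft(samples)
    magnitude = np.abs(spectrum)
    floor = magnitude.max() * 10 ** (threshold_db / 20)
    spectrum[magnitude < floor] = 0            # treat as inaudible
    scale = np.abs(spectrum).max() or 1.0
    levels = 2 ** (bits - 1) - 1
    quantized = np.round(spectrum / scale * levels) / levels * scale
    return np.fft.irfft(quantized, n=len(samples))
```

The key idea the workshop develops properly is that the quantization noise is shaped so it stays below what the ear can detect, rather than below an arbitrary fixed floor as here.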
Digital signal processing methods for audio effects used in mixing and mastering will be covered. Topics include techniques for dynamic range compression, reverberation and room impulse response measurement, equalization and filtering, and panning and spatialization, with attention given to digital emulation of analog processors and the implementation of time-varying effects. Among the effects studied will be single-band and multiband compressors, limiters, noise gates, de-essers, feedback delay network and convolutional reverberators, flangers and phasers, parametric and linear-phase equalizers, wah-wah and envelope-following filters, and the Leslie.
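As one example of the techniques listed, here is a bare-bones feed-forward dynamic range compressor sketched in Python; an envelope follower with separate attack and release times drives a static threshold/ratio gain curve. Parameter names and values are illustrative, not taken from the course materials.

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0, attack=0.01,
             release=0.1, sr=44100):
    """Feed-forward compressor: a one-pole envelope follower drives a
    static gain curve with the given threshold (dB) and ratio."""
    a_atk = np.exp(-1.0 / (attack * sr))       # fast smoothing when level rises
    a_rel = np.exp(-1.0 / (release * sr))      # slow smoothing when level falls
    env = 0.0
    out = np.empty_like(signal)
    for n, sample in enumerate(signal):
        level = abs(sample)
        coeff = a_atk if level > env else a_rel
        env = coeff * env + (1 - coeff) * level
        level_db = 20 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = over * (1 / ratio - 1) if over > 0 else 0.0
        out[n] = sample * 10 ** (gain_db / 20)
    return out
```

Limiters, gates, and de-essers reuse the same envelope-follower-plus-gain-curve structure with different curves and detector filters, which is why the course groups them together.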
The course material will be presented in daily lecture sessions with laboratory exercises interspersed. The lecture sessions will concentrate on theoretical issues in the design of digital audio effects, and will be complemented by laboratory work in which students develop effects algorithms of their own design.
The course is geared toward musicians and recording engineers with an engineering background, and toward engineers and computer scientists with an interest in music technology. Exposure to digital signal processing, including familiarity with digital filtering and the Fourier transform, is helpful. Some knowledge of Matlab and/or a modest amount of C programming experience is also helpful for the laboratory exercises.
This three-day summit on Audio over Networks is an exploration of the state of the art in Ethernet-based professional audio networks. Developers, engineers, musicians, and others interested in the growing practice of high-resolution audio over Ethernet will gather to focus on the new technology. The scope includes IP-based systems as well as systems with dedicated protocols.
A 1998 AES white paper on "Networking Audio and Music Using Internet2 and Next-Generation Internet Capabilities" expressed a vision of the future and the challenges that lay ahead. Six years later, with technical developments continuing, musical collaborations of various kinds have been tested and the Internet has evolved. Predicted application areas that are now taking off include audio production, music education, broadening musical participation, and scientific and engineering data representation (sonification). The summit offers an opportunity to compare today's reality with what was foreseen and to look ahead to what's next.
The summit is a "neck-ties removed" working group that brings together academic and commercial interests, developers and users, and audio specialists and network engineers. The program comprises presentations, a tutorial, and a panel discussion, along with hands-on demonstrations in the Banff concert and recording facilities, a "how-to" covering representative open-source software-based systems, and product demos.
Continued topics from the 1998 vision of audio over next-generation networks include current and future quality of service, implications of end-to-end design, cost and complexity of bridge devices, formats and adherence to audio industry standards, and scalability requirements. New topics will include but are not limited to Internet signal processing, user studies, and new artistic forms.
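Many of the format and bridging questions above come down to how audio is packetized for the network. A toy Python sketch of framing 16-bit PCM blocks with a sequence number for loss and reorder detection; the header layout is invented for illustration and does not correspond to any standard discussed at the summit.

```python
import struct

def pack_packet(seq, samples):
    """Prefix a block of 16-bit PCM samples with a 32-bit big-endian
    sequence number so a receiver can detect lost or reordered packets."""
    return struct.pack("!I", seq) + struct.pack(f"!{len(samples)}h", *samples)

def unpack_packet(packet):
    """Recover the sequence number and the PCM samples from a packet."""
    seq = struct.unpack("!I", packet[:4])[0]
    count = (len(packet) - 4) // 2
    return seq, list(struct.unpack(f"!{count}h", packet[4:]))
```

In a real system each packet would be handed to a UDP socket (e.g. `socket.sendto`) or a dedicated link-layer protocol, and the receiver would use the sequence numbers to conceal dropped frames; the trade-off between packet size, latency, and loss resilience is exactly the kind of quality-of-service question the summit takes up.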
© Copyright 2005 CCRMA, Stanford University. All rights reserved.