
Computer Music Hardware and Software




The CCRMA Music Kit and DSP Tools Distribution

David Jaffe and Julius Smith

New releases (V5.0+) are now made by Leigh Smith of tomandandy and Stephen Brandon at the University of Glasgow, who are porting the Music Kit to OPENSTEP, Apple's Mac OS X and Mac OS X Server, Windows 98, and Linux. The latest releases and progress reports can be found at http://www.tomandandy.com/MusicKit.

The 4.2 version of the Music Kit was released in 1997 and is available free of charge via FTP at ftp://ccrma-ftp.stanford.edu/pub/NeXT/MusicKit/. This release is compatible with NEXTSTEP software releases 3.2 and later on NeXT and Intel-based hardware. Also, Music Kit programs that are compiled under NEXTSTEP can run on OPENSTEP for Intel and NeXT hardware.

Release 4.2 is an incremental release with several significant additions.

Other Music Kit News

Until recently, we were making extensive use of the ``Frankenstein'' cards (in various forms), home-brewed DSP cards based on the Motorola EVMs. However, with the advent of the Turtle Beach Fiji and Pinnacle cards, we no longer feel it is necessary (or worth the trouble) to pursue the ``Frankenstein'' direction.

We have been planning to provide a combined sound/MIDI driver for SoundBlaster-compatible cards. We negotiated with NeXT to do this (we needed permission to use their sound driver code) and everything was ready to happen, but legal complications held things up, so we weren't able to get this done for the 4.2 release.

Music Kit Background

The Music Kit is an object-oriented software system for building music, sound, signal processing, and MIDI applications in the NEXTSTEP programming environment. It has been used in such diverse commercial applications as music sequencers, computer games, and document processors. Professors and students have used the Music Kit in a host of areas, including music performance, scientific experiments, computer-aided instruction, and physical modeling. The Music Kit was the first system to unify the MIDI and Music V paradigms, thus combining interaction with generality. (Music V, written by Max Mathews and others at Bell Labs three decades ago, was the first widely available ``computer music compiler.'')

The NeXT Music Kit was first demonstrated at the 1988 NeXT product introduction and was bundled in NeXT software releases 1.0 and 2.0. Since the NEXTSTEP 3.0 release, the Music Kit has been distributed by CCRMA. Questions regarding the Music Kit can be sent to musickit@ccrma.stanford.edu.

The CCRMA Music Kit and DSP Tools Distribution (or ``Music Kit'' for short) is a comprehensive package that includes on-line documentation, programming examples, utilities, applications, and sample score documents. It also comes with Bug56 (black hardware only), a full-featured, window-oriented, symbolic debugger by Ariel Corp. for the Motorola DSP5600x signal processing chip family.

Samply Great

Christian Herbst

Samply Great, a standalone Windows application with a user-friendly graphical interface, is a track-based sampling/mixing program with DSP features. Basic concepts of computer music, such as additive, subtractive, and granular synthesis, can be explored in a WYSIWYG manner.

The program uses sound samples, envelopes for additive synthesis (which can be derived from the analysis of an existing sound), and noise as sound sources. Several effects, for instance volume changes, waveshaping, or transposition, can be applied to the whole score, to each track, and also to each note of a track. The effects, as well as the sources, can be varied dynamically over the range of the score and/or each note.

All parameter curves/envelopes can be drawn with the mouse, providing an extremely intuitive working environment. If the computational load is not too great, the output can be heard in real time (using the Windows DirectSound API). An output file (WAVE format) is additionally created during each rendering process. Projects can be saved to and loaded from disk. The option of exporting the whole project as ANSI C code makes it possible to port and compile the project on platforms other than Windows, as well as allowing post-processing and fine-tuning of the project.

More information, the executable, and the source code of the C++ library used to create the application will be available online by May 2000 at http://www-ccrma.stanford.edu/~herbst/samply_great.

Singsing

Christian Herbst

Voice teachers/pedagogues usually lack an in-depth understanding of the concepts used to analyze the singing voice, a fact which is a considerable obstacle to putting those concepts into practice efficiently. Singsing, a Windows application with a simple graphical user interface, provides basic tools for bringing a nevertheless profound analysis of the singing voice into the teaching process.

For pitch detection and calculation of the residual signal, Singsing uses the program Praat and its shell script (developed by Paul Boersma; http://www.fon.hum.uva.nl/praat) as an underlying process. The program offers the following features: plots of the pitch tier, second-order perturbation, average wave cycle, and error signal, as well as time-varying spectral plots and spectrograms of the input, the residual, and the vibrato tier. Still to be developed is an estimation of the vocal tract shape.

The analysis results of each sound file are automatically written or appended to an ASCII output file, which can then be imported into other applications to calculate statistics.

More information and a Windows executable will be available online by late Summer 2000 at http://www-ccrma.stanford.edu/~herbst/singsing.

Mi_D

Tobias Kunze

Mi_D is a multi-platform shared library that offers clients a simple and unified, yet unique set of MIDI services not commonly found in existing driver interfaces. Its main design goal was to allow clients to add sophisticated MIDI support to their applications at minimal cost.

See also the Mi_D Home Page at: http://ccrma-www.stanford.edu/CCRMA/Software/mi_d/doc/

PadMaster, an Interactive Performance Environment: Algorithms and Alternative Controllers

Fernando Lopez Lezcano

PadMaster is a real-time performance/improvisation environment currently running under the NEXTSTEP operating system. The system primarily uses the Mathews/Boie Radio Drum as a three-dimensional controller for interaction with the performer, although that is no longer the only option. The Radio Drum communicates with the computer through MIDI and sends x-y position and velocity information when either of the batons hits the surface of the drum. The Drum is also polled by the computer to determine the absolute position of the batons. This information is used to split the surface of the drum into up to 30 virtual pads of variable size, each one independently programmable to react in a specific way to a hit and to the position information stream of one or more axes of control. Pads can be grouped into Scenes, and the screen of the computer displays the virtual surface and gives visual feedback to the performer. Performance Pads can control MIDI sequences, playback of soundfiles, algorithms, and real-time DSP synthesis. The velocity of the hits and the position information can be mapped to different parameters through transfer functions. Control Pads are used to trigger actions that globally affect the performance.

The architecture of the system has been opened, and it is now possible to create interfaces to other MIDI controllers such as keyboards, pedals, percussion controllers, the Lightning controller, and so on. More than one interface controller can be active at the same time, listening to one or more MIDI streams, and each one can map gestures to the triggering and control of virtual pads. The problem of how to map different simultaneous controllers to the same visible surface has not been completely resolved at the time of this writing (having just one controller makes it easy to give simple visual feedback of the result of the gestures, something that is essential in controlling an improvisation environment). Another interface currently being developed does not depend on MIDI and controls the system through a standard computer graphics tablet. The surface of the tablet behaves in virtually the same way as the surface of the Radio Drum, and tablets that have pressure sensitivity open the way to three-dimensional continuous control similar to that of the Radio Drum (though of course not as flexible). The advantage of this interface is that it does not use MIDI bandwidth and relies on hardware that is standard and easy to get.

Performance Pads will have a new category: Algorithmic Pads. These pads can store algorithms that can be triggered and controlled by gestures of the performer. While a graphical programming interface has not yet been developed at the time of this writing, the composer can easily create algorithms by programming them in Objective-C within the constraints of a built-in set of classes and objects that should suffice for most musical purposes. Any parameter of an algorithm can be linked through a transfer function to the movement of one of the axes of control. Multiple algorithms can be active at the same time and can respond in different ways to the same control information, making it easy to transform simple gestures into complicated musical responses. An algorithm can also be the source of control information that other algorithms use to affect their behavior.

A Dynamic Spatial Sound Movement Toolkit

Fernando Lopez Lezcano

This brief overview describes a dynamic sound movement toolkit implemented within the context of the CLM software synthesis and signal processing package. Complete details can be found at http://www-ccrma.stanford.edu/~nando/clm/dlocsig/.

dlocsig.lisp is a unit generator that dynamically moves a sound source in 2d or 3d space and can be used as a replacement for the standard locsig in new or existing CLM instruments (this is a completely rewritten and much improved version of the old dlocsig that I started writing in 1992 while I was working at Keio University in Japan).

The new dlocsig can generate spatial positioning cues for any number of speakers, which can be arbitrarily arranged in 2d or 3d space. The number of output channels of the current output stream (usually defined by the :channels keyword in the enclosing with-sound) determines which speaker arrangement is used. In pieces that can be recompiled from scratch, this feature allows the composer to easily create several renditions of the same piece, each one optimized for a particular number and spatial configuration of speakers and for a particular rendering technique.

dlocsig can render the output soundfile with different techniques. The default is to use amplitude panning between adjacent speakers (between pairs of speakers in 2d space or groups of three speakers in 3d space). dlocsig can also create an Ambisonics-encoded four-channel output soundfile suitable for feeding into an appropriate decoder for multiple-speaker reproduction. Or it can decode the Ambisonics-encoded information to an arbitrary number of output channels if the speaker configuration is known in advance. In the near future dlocsig will also be able to render to stereo soundfiles with HRTF-generated cues for headphone or speaker listening environments. In all cases Doppler shift is generated, as well as distance-based amplitude scaling with user-defined exponents and a user-defined ratio of direct to reverberated sound.
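
To illustrate the pairwise amplitude panning case, the sketch below computes textbook equal-power gains for a source located between two adjacent speakers; dlocsig's actual gain curves may differ.

    (defun equal-power-gains (pos)
      ;; POS runs from 0.0 (at the first speaker) to 1.0 (at the second).
      ;; The cos/sin crossfade keeps the summed power constant,
      ;; since cos^2 + sin^2 = 1 for any position.
      (let ((theta (* pos (/ pi 2))))
        (values (cos theta) (sin theta))))

    ;; (equal-power-gains 0.5) => 0.7071... 0.7071...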

The movement of sound sources is described through paths. These are CLOS (Common Lisp Object System) objects that hold the information dlocsig needs to move the source in space and are independent of the unit generator itself. Paths can be reused across many calls to dlocsig and can be translated, scaled, and rotated in space as needed. There are several ways to describe a path in space. Bezier paths are described by a set of discrete points in 2d or 3d space that are later joined by smoothly curved Bezier segments. This description is very compact: a few points can describe a complex trajectory in 3d space. Paths can also be specified in geometric terms, and one such implementation (spirals) is currently provided.
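
As a sketch of how such paths might be created (the function and keyword names below follow the dlocsig documentation of the time, but treat the exact signatures as assumptions):

    (defvar *bezier-path*
      ;; Three 2d points that dlocsig joins with smooth Bezier segments.
      (make-path :path '((-10 10) (0 5) (10 10))))

    (defvar *spiral-path*
      ;; A geometrically specified path; :turns is an assumed keyword.
      (make-spiral-path :turns 2))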

The dlocsig unit generator uses the same interface as all other CLM unit generators. make-dlocsig creates a structure for a given path and returns (as multiple values) the structure and the beginning and ending samples of the note. dlocsig is the macro that gets compiled inside the run loop and localizes the samples in space.
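
Putting the pieces together, a minimal instrument might look like the sketch below, which follows the calling pattern just described (make-dlocsig returning the structure plus begin/end samples, and dlocsig localizing each sample inside the run loop); the make-dlocsig keywords are assumptions.

    (definstrument moving-sine (start dur freq amp path)
      (multiple-value-bind (dloc beg end)
          (make-dlocsig :start-time start :duration dur :path path)
        (let ((osc (make-oscil :frequency freq)))
          (run
           (loop for i from beg below end do
             (dlocsig dloc i (* amp (oscil osc))))))))

    ;; The :channels keyword selects the speaker arrangement, e.g. quad:
    (with-sound (:channels 4)
      (moving-sine 0 2.0 440 0.2 *bezier-path*))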

grani, a granular synthesis instrument for CLM

Fernando Lopez Lezcano

grani.ins is a quite complete CLM (Common Lisp Music) granular synthesis instrument designed to process (i.e., mangle) input soundfiles. Almost all parameters of the granulation process can be either constant numbers or envelopes, so a note generated with grani can show very complex behavioral changes over its duration. Parameters can control grain density in grains per second, grain duration, grain envelope (with up to two envelopes and an interpolating function), sampling rate conversion factor in linear or pitch scales, spatial location of grains, number of grains to generate or duration of the note, and so on. Almost all the parameters have a companion ``spread'' parameter that defines a random spread around the central value defined by the base parameter (both can be envelopes).
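
A hypothetical call might look like the sketch below; the keyword names are illustrative stand-ins for grani's actual parameter list (see the URL below for the real interface).

    (with-sound (:channels 2)
      ;; Keyword names are illustrative, not grani's documented parameters.
      (grani 0 4.0 1.0 "speech.snd"
             :grain-density '(0 10 1 50)     ; envelope: 10 to 50 grains per second
             :grain-duration 0.05            ; 50 ms grains
             :grain-duration-spread 0.02     ; random spread around the base value
             :srate '(0 1.0 1 0.5)))         ; glide down an octave over the note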

The first ``grani'' instrument was originally created as an example instrument for the 1996 Summer Workshop. In its present form it has been used to teach granular synthesis in the 1998 Summer Workshop and in 220a (Introduction to Sound Synthesis). It has become a pretty popular instrument at CCRMA and was used by its author to compose iICEsCcRrEeAaMm, a four-channel tape piece premiered at the 1998 CCRMA Summer Concert.

Complete details can be found at: http://www-ccrma.stanford.edu/~nando/clm/grani/

ATS (Analysis/Transformation/Synthesis): a Lisp environment for Spectral Modeling

Juan Pampin

ATS is a library of Lisp functions for spectral Analysis, Transformation, and Synthesis of sounds. The Analysis section of ATS implements different partial tracking algorithms, allowing the user to decide which strategy is best suited to a particular sound. Analysis data is stored as a Lisp abstraction called a ``sound''. A sound in ATS is a symbolic object representing a spectral model that can be sculpted using a wide variety of transformation functions. ATS sounds can be synthesized using different target algorithms, including additive, subtractive, granular, and hybrid synthesis techniques. The synthesis engine of ATS is implemented using the CLM (Common Lisp Music) synthesis and sound processing language and runs in real time on many different platforms. ATS together with CLM provides an environment for sound design and composition that allows the user to explore the possibilities of Spectral Modeling in a very flexible way. The use of a high-level language like Lisp offers the advantage of a symbolic representation of spectral qualities. For instance, high-level traits of a sound, such as global spectral envelopes, frequency centroids, formants, vibrato patterns, etc., can be treated as symbolic objects and used to create abstract sound structures called ``spectral classes''. In a higher layer of abstraction, the concept of a spectral class is used to implement predicates and procedures, forming spectral logic operators. In terms of this logic, sound morphing becomes a ``union'' (a dynamic interchange of features) of spectral classes that generates a particular hybrid sound instance.
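
By way of illustration only, a transformation session might look like the following sketch; the function names (ats-tracker, ats-stretch, ats-synth) are hypothetical placeholders rather than ATS's documented interface (see the URL below).

    ;; Hypothetical names throughout; shown only to suggest the workflow.
    (let ((flute (ats-tracker "flute.snd" 'flute)))  ; analyze into an ATS ``sound''
      (ats-stretch flute 2.0)                        ; transform the spectral model
      (ats-synth flute :output "flute-x2.snd"))      ; resynthesize through CLM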

For more information about ATS see http://www-ccrma.stanford.edu/~juan/ATS.html.

Spectral User Interface (SUI): real-time spectral transformations in ATS

Juan Pampin

Spectral transformations have become an important tool for electronic music composers in the last few years. While working with spectral models, composers usually want to evaluate how wide a range of new sounds is available through spectral transformations of a particular source. Usually this kind of exploration has to be done step by step, out of real time, due to its complexity, limiting the composer to a gradual approximation of the results. This kind of approach tends to constrain the composer's ability to combine transformations and to explore different regions of the spectral structure, ultimately limiting creative work in this domain. ATS provides a Spectral User Interface (SUI) for real-time spectral transformations. Using CLM's real-time capabilities, the SUI provides the user with a set of sliders that control different transformation parameters during resynthesis. In its present version the SUI provides a number of spectral controllers.

Conclusions: Using ATS's SUI the composer can explore many ways of transforming spectral data during resynthesis. Transformations can not only be dynamic but can also be limited to a particular region of the spectrum by means of the TimeScale slider. Transformations can be compounded to create complex spectral results that the user can explore in real time. On SGI platforms the sliders can be controlled through MIDI, so the user can use more ergonomic controllers (fader boxes, wheels, etc.) to control several sliders synchronously.

Stanford Computer-Music Packages for Mathematica

Craig Stuart Sapp

The web page http://www-ccrma.stanford.edu/CCRMA/Software/SCMP contains links to various Mathematica packages dealing with computer music topics. The main package, SCMTheory, contains visualization and manipulation tools covering the fundamentals of digital signal processing, such as complex numbers, plotting complex domains and ranges, and modulo sequences and manipulations. The Windows package contains the definitions of various analysis windows used in short-time Fourier transform analysis. The FMPlot package contains functions for plotting simple FM-synthesis spectra.

All packages run with Mathematica version 2.0 or greater, except the Windows package, which requires Mathematica 3.0. The SCMP main web page includes Mathematica notebooks that demonstrate various aspects of the packages, as well as PDF versions of those notebooks for people who do not have Mathematica.

The Synthesis ToolKit (STK)

Perry R. Cook and Gary P. Scavone

STK is a set of audio signal processing C++ classes and instruments for music synthesis. You can use these classes to create programs which make cool sounds using a variety of synthesis techniques. This is not a terribly novel concept, except that STK is very portable (it's mostly platform-independent C and C++ code) AND it's completely user-extensible. So, the code you write using STK actually has some chance of working in another 5-10 years. STK currently works on SGI (Irix), Linux, NeXTStep, and Windows platforms. Oh, and it's free for non-commercial use. The only parts of STK that are platform-dependent concern real-time sound and MIDI input and output ... but we've taken care of that for you. The interface for MIDI input and for the simple Tcl/Tk graphical user interfaces (GUIs) we provide is the same on every platform, so it's easy to voice and experiment in real time using either the GUIs or MIDI.

STK isn't one particular program. Rather, STK is a set of C++ classes that you can use to create your own programs. We've provided a few example applications that demonstrate some of the ways you could use these classes. But if you have specific needs, you will probably have to either modify the example programs or write a new program altogether. Further, the example programs don't have a fancy GUI wrapper. If you feel the need to have a "drag and drop" GUI, you probably don't want to use STK. Spending hundreds of hours making platform-dependent GUI code would go against one of the fundamental design goals of STK: platform independence. STK can generate .snd, .wav, and .mat output soundfile formats simultaneously (besides real-time sound output), so you can view your results using one of the numerous sound/signal analysis tools already available over the WWW (e.g. Snd, Cool Edit, Matlab). For those instances where a simple GUI with sliders and buttons is helpful, we use Tcl/Tk (which is freely distributed for all the STK-supported platforms). A number of Tcl/Tk GUI scripts are distributed with the STK release.

Perry Cook began developing a precursor to STK under NeXTStep at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University in the early 1990s. With his move to Princeton University in 1996, he ported everything to C++ on SGI hardware, added real-time capabilities, and greatly expanded the available synthesis techniques. With the help of Bill Putnam, Perry also made a port of STK to Windows95. Gary Scavone began using STK extensively in the summer of 1997 and completed a full port of STK to Linux early in 1998. He finished the fully compatible Windows port (using the DirectSound API) in June 1998. Numerous improvements and extensions have been made since then.

For more information about STK, see http://www-ccrma.stanford.edu/CCRMA/Software/STK/.


Common Lisp Music, Snd and Common Music Notation

William Schottstaedt

Common Lisp Music (CLM) is a sound synthesis package in the Music V family written primarily in Common Lisp. The instrument design language is a subset of Lisp, extended with a large number of generators: oscil, env, table-lookup, and so on. The run-time portion of an instrument can be compiled into C or Lisp code. Since CLM instruments are Lisp functions, a CLM note list is just a Lisp expression that happens to call those functions. Recent additions to CLM include support for real-time interaction and integration with the Snd sound editor.
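
For example, a minimal instrument and note list might look like the following sketch, written in the style of CLM's documented examples (the instrument name and envelope breakpoints are illustrative):

    (definstrument simp (start dur freq amp)
      (let* ((beg (floor (* start *srate*)))
             (end (+ beg (floor (* dur *srate*))))
             (osc (make-oscil :frequency freq))
             (ampf (make-env :envelope '(0 0 50 1 100 0) :scaler amp :duration dur)))
        (run
         (loop for i from beg below end do
           (outa i (* (env ampf) (oscil osc)))))))

    ;; The note list is just Lisp that calls the instrument:
    (with-sound ()
      (simp 0 1.0 440 0.3)
      (simp 1.5 1.0 660 0.3))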

Snd is a sound editor modeled loosely after Emacs and an old, sorely missed PDP-10 editor named Dpysnd. It can accommodate any number of sounds, each with any number of channels. Each channel is normally displayed in its own window, with its own cursor, edit history, and marks; each sound has a control panel for trying out various changes quickly; there is an overall stack of 'regions' that can be browsed and edited; channels and sounds can be grouped together during editing; and edits can be undone and redone without restriction.

Common Music Notation (CMN) is a music notation package written in Common Lisp; it provides its own music symbol font.

CLM, CMN, and Snd are available free, via anonymous ftp at ftp://ftp-ccrma.stanford.edu as pub/Lisp/clm-2.tar.gz, pub/Lisp/cmn.tar.gz, and pub/Lisp/snd-4.tar.gz.

SynthBuilder, SynthScript, and SynthServer: Tools for Sound Synthesis and Signal Processing Development, Representation, and Real-Time Rendering

Julius Smith, David Jaffe, Nick Porcaro, Pat Scandalis, Scott Van Duyne, and Tim Stilson

The SynthBuilder, SynthScript, and SynthServer projects have been spun out from CCRMA to a new company, Staccato Systems, Inc. The tools are currently being ported to ``all major platforms'' and focused into specific software products. Watch the Staccato website for the latest details.

Common Music

Heinrich Taube

What is Common Music?

Common Music (CM) is an object-oriented music composition environment. It produces sound by transforming a high-level representation of musical structure into a variety of control protocols for sound synthesis and display: MIDI, Csound, Common Lisp Music, Music Kit, C Mix, C Music, M4C, RT, Mix and Common Music Notation. Common Music defines an extensive library of compositional tools and provides a public interface through which the composer may easily modify and extend the system. All ports of Common Music provide a text-based music composition editor called Stella. A graphical interface called Capella currently runs only on the Macintosh. See http://www-ccrma.stanford.edu/CCRMA/Software/cm/cm.html for more information.
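
The flavor of this high-level approach can be suggested in plain Common Lisp (this is deliberately not CM's actual interface): musical structure is built as data and only later rendered to whichever target syntax is selected.

    (defun arpeggio (root count &key (interval 4) (dur 0.25))
      ;; A toy stand-in for the kind of structure CM renders to MIDI,
      ;; CLM calls, Csound scores, etc.: (onset keynum duration) triples.
      (loop for i below count
            collect (list (* i dur) (+ root (* i interval)) dur)))

    ;; (arpeggio 60 4) => ((0.0 60 0.25) (0.25 64 0.25) (0.5 68 0.25) (0.75 72 0.25))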

History

Common Music began in 1989, while the author was a guest composer at CCRMA, Stanford University, as a response to the proliferation of different audio hardware, software, and computers that resulted from the introduction of low-cost processors. As choices increased, it became clear that composers would be well served by a system that defined a portable, powerful, and consistent interface to the myriad sound rendering possibilities. Most of the system as it exists today was implemented at the Institut für Musik und Akustik at the Zentrum für Kunst und Medientechnologie in Karlsruhe, Germany, where the author worked for five years. Common Music continues to evolve at the University of Illinois at Urbana-Champaign, where the author is now a professor of music composition. In 1996 Common Music received First Prize in the computer-assisted composition category at the 1er Concours International de Logiciels Musicaux in Bourges, France.

Implementation

Common Music is implemented in Common Lisp and CLOS and runs on a variety of computers, including NeXT, Macintosh, SGI, SUN, and i386. Source code and binary images are freely available at several internet sites. In order to compile the source code you need Common Lisp. The best implementations are commercial products, but several good public domain implementations are also available on the Internet. See http://www-ccrma.stanford.edu/CCRMA/Software/cm/cm.html for more information.

Synthesis Control

Each synthesis target is represented as a ``syntax'' in Common Music. Any combination of syntaxes can be included when the system is built from its sources. The available syntaxes are:

Synthesis Target        Syntax   Works on
C Mix                   CMIX     everywhere
C Music                 CMUSIC   everywhere
Csound                  CSOUND   everywhere
Common Lisp Music       CLM      NeXTstep, Linux, IRIX
Common Music Notation   CMN      everywhere
M4C                     M4C      NeXTstep
Mix                     SGIMIX   IRIX
MIDI                    MIDI     everywhere
Music Kit               MK       NeXTstep
RT                      RT       NeXTstep, IRIX

Whenever possible, CM sends and receives directly to and from the target. Otherwise, a file can be generated and sent to the target automatically so that the process of producing sound appears seamless and transparent.

All ports of CM support reading level 0 and 1 MIDI files and writing level 0 files. Direct-to-driver MIDI input and output is supported for the following configurations:

Platform       Lisp implementations
Mac OS 7.x     MCL 2.0.1, 3.0
NeXTstep 3.2   ACL 3.2.1, 4.1; GCL 21.1; CLISP
Windows 3.1    ACL/PC

Contact

To receive email information about software releases, or to track developments in CCRMA's family of Lisp music programs (CM, CLM, and CMN), please join cmdist@ccrma.stanford.edu by sending your request to cmdist-request@ccrma.stanford.edu.

