
Computer Music Hardware and Software (Past)




SMSPlus: Post-Processed Real-Time SMS Instruments in CLM (January 1998)

Celso Aguiar

This project is an adaptation of Xavier Serra's Spectral Modeling Synthesis (SMS) technique for compositional purposes. It provides an SMS sound composition environment integrating several tools. First, the sound is analyzed from a Unix shell using Serra's C programs. A graphical interface (SMSEditor, from an Objective C prototype by Serra) has been greatly enhanced to display the resulting files in a three-dimensional waterfall plot. After the analysis is done, several routines support reading and writing SMS files from inside MatLab (cmex files) and the post-processing and normalization of these files. Once analysis and post-processing are done, a series of routines and instruments integrating Lisp, CLM (Bill Schottstaedt), and C is used to resynthesize the sound. The resynthesis employs the inverse-FFT algorithm (Xavier Rodet), which Xavier Serra and I programmed in the '94 Summer Workshop at CCRMA. The resynthesis programs run in real time.
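
The following is a minimal numpy sketch of the resynthesis idea, not Serra's actual code: it rebuilds the deterministic (sinusoidal) part of an SMS analysis from per-frame partial frequencies and amplitudes using a plain oscillator bank, whereas the real-time programs use Rodet's inverse-FFT method for speed. All names and the toy frame data are illustrative.

    import numpy as np

    def resynth_partials(freqs, amps, frame_rate, sample_rate):
        """Oscillator-bank resynthesis of SMS-style partial tracks.

        freqs, amps: arrays of shape (n_frames, n_partials) holding
        per-frame partial frequencies (Hz) and linear amplitudes.
        Envelopes are linearly interpolated between analysis frames
        and phase is accumulated per partial.
        """
        n_frames, n_partials = freqs.shape
        hop = int(sample_rate / frame_rate)          # samples per frame
        n_samples = (n_frames - 1) * hop
        frame_pos = np.arange(n_samples) / hop       # fractional frame index
        f = np.empty((n_samples, n_partials))
        a = np.empty((n_samples, n_partials))
        for p in range(n_partials):
            f[:, p] = np.interp(frame_pos, np.arange(n_frames), freqs[:, p])
            a[:, p] = np.interp(frame_pos, np.arange(n_frames), amps[:, p])
        phase = np.cumsum(2 * np.pi * f / sample_rate, axis=0)
        return (a * np.sin(phase)).sum(axis=1)       # mix the partials

    # Toy data: two partials gliding up a semitone over one frame interval.
    freqs = np.array([[440.0, 880.0], [466.16, 932.33]])
    amps = np.array([[0.5, 0.25], [0.5, 0.25]])
    y = resynth_partials(freqs, amps, frame_rate=86, sample_rate=44100)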

The MusiCloth Project (February 1999)

Lonny Chu

The MusiCloth project is a study in the design and implementation of a performance environment for computer music that combines a graphical display with a physical interface. The conceptual model for the MusiCloth is a large tapestry which the performer manipulates through large hand and arm motions and which produces MIDI output based on the performer's actions. Ultimately, the visual display should be implemented on a large, high-definition display so that the performer can stand before it, as if standing in front of a tapestry. The performer would then manipulate areas of the tapestry through hand and arm motions. This design would allow both large, sweeping motions and smaller, more precise control over smaller sections of the display. For flexibility, the graphical design of the display, along with its corresponding mappings to performance input and MIDI output, can be implemented using custom-designed overlays created by the composer. Currently, this project exists as a simple prototype to be run on a Power Macintosh G3. Eventually, however, the project should be ported to a large display system such as the Information Mural in the Stanford Computer Science department.
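
As a purely hypothetical illustration of the kind of mapping such an overlay might define (the project's actual overlay format and mappings are not described above), consider a sketch in which columns of the tapestry select pitch, height selects velocity, and gesture speed drives a modulation controller:

    def tapestry_to_midi(x, y, speed, width=1920, height=1080):
        """Map a hand position on the display and a gesture speed (0..1)
        to MIDI messages: columns select pitch, height selects velocity,
        and bigger sweeps produce more modulation."""
        column = int(8 * x / width)                  # 8 vertical strips
        note = 48 + column * 5                       # one pitch per strip
        velocity = max(1, min(127, int(127 * (1 - y / height))))
        mod = max(0, min(127, int(127 * speed)))
        note_on = (0x90, note, velocity)             # status, data1, data2
        mod_wheel = (0xB0, 1, mod)                   # CC 1 (modulation)
        return note_on, mod_wheel

    print(tapestry_to_midi(x=600, y=200, speed=0.4))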

The CCRMA Music Kit and DSP Tools Distribution (May 1996)

David Jaffe and Julius Smith

The Music Kit is an object-oriented software system for building music, sound, signal processing, and MIDI applications in the NEXTSTEP programming environment. It has been used in such diverse commercial applications as music sequencers, computer games, and document processors. Professors and students have used the Music Kit in a host of areas, including music performance, scientific experiments, computer-aided instruction, and physical modeling. The Music Kit is the first to unify the MIDI and Music V paradigms, thus combining interaction with generality. (Music V, written by Max Mathews and others at Bell Labs three decades ago, was the first widely available ``computer music compiler.'')

The NeXT Music Kit was first demonstrated at the 1988 NeXT product introduction and was bundled in NeXT software releases 1.0 and 2.0. Since the NEXTSTEP 3.0 release, the Music Kit has been distributed by CCRMA. Questions regarding the Music Kit can be sent to musickit@ccrma.stanford.edu.

The CCRMA Music Kit and DSP Tools Distribution (or ``Music Kit'' for short) is a comprehensive package that includes on-line documentation, programming examples, utilities, applications and sample score documents. The package also comes with Bug56, a full-featured, window-oriented, symbolic debugger by Ariel Corp. for the Motorola DSP5600x signal processing chip family.

Source code is available for everything except Bug56. (The low-level DSP and MIDI drivers are available only for NEXTSTEP-Intel.) This means researchers and developers may study the source or even customize the Music Kit and DSP Tools to suit their needs. Enhancements can be sent to musickit@ccrma.stanford.edu to have them considered for future CCRMA releases. Commercial NeXT software developers may freely incorporate and adapt the software to accelerate development of NEXTSTEP software products. (Free commercial use of files copyrighted by NeXT Inc. is restricted to NEXTSTEP platforms.)

People who answered the Music Kit survey sent around last year will notice that many of the most requested items on the survey have been included in the 4.0 release. Please send your future Music Kit requests to musickit@ccrma.stanford.edu. To subscribe to the Music Kit mailing list, send email to ``listproc@ccrma.Stanford.EDU''. The body of the message (not the Subject line) should contain the text ``subscribe mkdist <your name>'' (you don't type the '<' and '>'). To unsubscribe, send an email with ``unsubscribe mkdist'' in the body of the message.

See the Music Kit Release Notes for further details.

The Music Kit was designed by David Jaffe and Julius Smith, with input from James A. Moorer and Roger Dannenberg. The Objective-C portion of the Music Kit was written by David A. Jaffe, while the signal processing and synthesis portion was written by Julius Smith. The Ensemble application and much of the SynthPatch library were written by Michael McNabb. Douglas Fulton had primary responsibility for the documentation. Others who contributed to the project included Dana Massie, Lee Boynton, Greg Kellogg, Douglas Keislar, Michael Minnick, Perry Cook, John Strawn and Rob Poor.

Highlights of the Music Kit 4.1 Release

The Music Kit 4.1 release is essentially Release 4.0 plus support for NEXTSTEP 486/Pentium machines. It uses one or more plug-in DSP cards to support music synthesis and digital audio processing. MIDI is similarly provided by plug-in cards. The release is ``fat'' so there is only one package that works on both NeXT and Intel-processor computers.

For music synthesis and digital audio processing on Intel hardware, the 4.1 Music Kit provides drivers for three DSP sound cards: the Ariel PC-56D, the Turtle Beach Multisound, and the i*link i56.

For MIDI on Intel hardware, the Music Kit provides a driver for MPU-401 cards (such as the MusicQuest family and the SoundBlaster-16), emulating the functionality of NeXT's MIDI driver, including synch to MIDI time code. Source to all the drivers is included in the Music Kit Source Package.

While only one DSP card is required, the power of a system can be scaled up by the use of multiple cards. An application built with the Music Kit can simultaneously use multiple DSP and MIDI cards by the same or different manufacturers, with details of DSP resource allocation handled automatically. In addition, the drivers provide automatic sensing, so applications can be moved between machines with different hardware configurations with no reconfiguration necessary.

NeXT hardware has not been left behind. The Music Kit now supports the 192K DSP extension memory board (available from S.F.S.U.) with automatic sensing.

Other new features include a MusicKit panel for the Preferences application that sets various defaults and manages multiple DSP cards.

See the Music Kit 4.1 Announcement for further details regarding the supported DSP cards.

For further inquiries regarding the Music Kit or DSP tools, send email to musickit@ccrma.stanford.edu. To join the Music Kit email list, send a subscribe message to mkdist-request@ccrma.stanford.edu.

Capella: A Graphical Interface for Algorithmic Composition (May 1996)

Heinrich Taube and Tobias Kunze

Capella is an object-oriented graphical interface for algorithmic composition in Common Music. It defines classes of browsers and worksheets that implement a consistent set of visualization tools and serve as a graphical front end for the system. The interface currently runs on the Macintosh under Macintosh Common Lisp.

Algorithmic composition is a complex activity in which both musical and technological issues must be addressed in parallel. This, in turn, places special requirements on a graphical interface that supports the process. Object-oriented composition environments such as Common Music (Taube 1994), DMix (Oppenheim 1993), and Mode (Pope 1992) place additional demands on graphical tools due to the breadth of representation and functionality that these kinds of systems implement. Smalltalk environments are able to take advantage of the powerful windowing system provided by Smalltalk itself. Since Common Music was designed to be as portable as possible, without the aid of a native windowing system, almost no attempt to address visualization issues was made until recently. Until now, visual output in Common Music was completely text-based, similar to the type of display one sees when working, for example, in a Unix shell window. Common Music's command-line interpreter, Stella, connects to the system's toolbox much as a shell connects to Unix. Although it allows powerful input expressions to be formulated, Stella does not allow the inner processes to be easily understood. Capella is a response to some of the communication limitations in Stella, while keeping in mind that graphic representation and mouse-based gestures are not always the best or most expedient models for interacting with a complex system. Capella has been designed to be a complement, not a replacement, for the two other modes of interaction supported by the system: command processing from Stella and procedure invocation from Lisp. Common Music simply runs all three modes ``in parallel'' and the composer is free to choose whichever is most appropriate to a particular situation.

Capella is still in the early stages of development. Its primary goal is to allow a set of flexible visualization tools to be developed, but it also makes interacting with the system as a whole easier and more transparent. The need for transparency is particularly acute in algorithmic composition workshops, where participants must quickly absorb not just new theoretical concepts, but a specific implementation of them as well.

SEE--A Structured Event Editor: Visualizing Compositional Data in Common Music (January 1998)

Tobias Kunze and Heinrich Taube

Highly structured music composition systems such as Common Music raise the need for data visualization tools general and flexible enough to adapt seamlessly to the--at times very unique--criteria composers employ when working with musical data. These criteria typically involve multiple levels of data abstraction and interpretation. A ``passing note'', for instance, is a fairly complex, compound musical predicate, based on properties of several other, lower-level musical predicates such as the degree of consonance, metric position, or melodic direction, all of which are of different complexity, draw upon different primitives, and apply only to a limited set of data types, that is, ``notes''. Visualizing compound musical predicates then translates into mapping a set of criteria--predicates and properties--onto a set of display parameters.
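
To make the notion of a compound predicate concrete, here is a small illustrative sketch, assuming a simplified note representation; SEE's real predicates live inside Common Music and are considerably richer:

    from dataclasses import dataclass

    @dataclass
    class Note:
        pitch: int      # MIDI key number
        beat: float     # metric position within the bar

    def is_step(a, b):
        """Low-level predicate: melodic motion by a second."""
        return 1 <= abs(b.pitch - a.pitch) <= 2

    def is_weak_beat(n, beats_per_bar=4):
        """Low-level predicate: the note falls off the strong beats."""
        return (n.beat % beats_per_bar) not in (0.0, 2.0)

    def is_passing_note(prev, cur, nxt):
        """Compound predicate built from the lower-level ones: approached
        and left by step in the same direction, on a weak beat."""
        same_direction = (cur.pitch - prev.pitch) * (nxt.pitch - cur.pitch) > 0
        return (is_step(prev, cur) and is_step(cur, nxt)
                and same_direction and is_weak_beat(cur))

    line = [Note(60, 0.0), Note(62, 1.0), Note(64, 2.0)]
    print(is_passing_note(*line))   # True: D connects C and E by step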

The SEE visualization tool provides graphical and programming interfaces for these two tasks. It consists of an abstracting program layer that allows custom musical predicates to be constructed out of a possibly heterogeneous set of data, and a separate program module that controls their mapping onto a wide variety of display parameters. Because large screens and full color support are becoming standard on most computer systems, and to accommodate the complexity that comes with visualizing musical predicates in general, the display parameters make systematic use of both color and the 3D visualization paradigm. Thus object position and extension along the x, y, and z axes, object representation (model), and color (the position of its color along the coordinate axes of the current color model) can together encode ten or more predicates.
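
A correspondingly small sketch of the mapping side, with placeholder slot names rather than SEE's actual display parameters:

    def display_attributes(predicates):
        """Map predicate values in [0, 1] onto display parameters: position
        and extent along the x, y, and z axes, the object model, and three
        color coordinates, so that ten criteria can be shown at once."""
        slots = ["x", "y", "z", "dx", "dy", "dz",
                 "model", "red", "green", "blue"]
        return dict(zip(slots, predicates.values()))

    attrs = display_attributes({
        "metric_position": 0.25, "consonance": 0.9, "melodic_dir": 1.0,
        "duration": 0.5, "loudness": 0.7, "register": 0.6,
        "articulation": 0.4, "is_passing": 0.0, "phrase_pos": 0.3,
        "voice": 0.1,
    })
    print(attrs["x"], attrs["red"])   # 0.25 0.0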

Although SEE may be used as a standalone tool, it is highly integrated and primarily intended to be used with Capella, Common Music's graphical user interface. The application framework itself and the programming interfaces are implemented in Common Lisp, and thus run on a variety of platforms.

The current version is being developed on an SGI workstation using the X11 windowing system and the OpenGL and OpenInventor graphics standards, but portability is a high priority, and upcoming ports will most probably begin with the Apple Macintosh platform.

Ashes Dance Back, a collaborative work with Jonathan Harvey (February 1999)

Juan Pampin

I collaborated with Professor Jonathan Harvey on the sound design of his piece Ashes Dance Back, for choir and electronic sounds. This collaboration was four quarters long, covering fall/winter 95-96 and fall/winter 96-97. At the request of Professor Harvey, I used my ATS system (see the ``Current Research Activities'' section) for spectral modeling to generate the electronic sounds of the piece, based on the analysis and transformation of a single vocal sound: a B-flat sample of my own singing.

During the composition of this piece, many improvements and additions were made to ATS.

The equalization, montage, and final mix of all the electronic materials were done using CLM. For the performance of the electronic sounds of the piece we used the following strategy: long sequences (most of them backgrounds) were stored on CD and triggered by the sound engineer during the concert, while medium-to-short materials (1 to 20 seconds long) were transferred to two Emu E64 samplers and interpreted by a keyboard player during the performance. Ashes Dance Back was premiered at the Strasbourg Musica Festival on September 27, 1997.

Computer-based implementation of Karlheinz Stockhausen's piece Mantra (February 1999)

Juan Pampin

Karlheinz Stockhausen's piece Mantra (1970), for two pianos and live electronics, marked an important point in real-time electronic music. The piece presents a whole network of interactions, both in terms of instrumental actions and of sound processing. The performers are required to control not only the intricate interplay between the two instruments but also the way the sound of their pianos is transformed by means of ring modulation. A noticeable gap in ``musical'' interpretation arises here: while the players can control to a great extent the piano gestures carefully notated by the composer in the score, adjusting the parameters of ring modulation with a ``dial'' provided with the original analog equipment (designed by Stockhausen back in 1970) is awkward and complicated for them. The motivation for this project was to create a new interface for the dynamic control of the ring modulators, aiming both to keep the expression of the original setup, which obviously represents an important part of the piece (i.e., the ``continuous'' character of the dial, the grid of fixed frequencies/pitches, etc.), and to create a homogeneous interface for the pianists. The project was achieved in four stages:

  1. Interface research. In this stage the goal was to decide which interface was most appropriate for the piece, following the requests of a professional pianist (Tom Schultz). General questions of ergonomics were considered, especially regarding the use of keyboard interfaces and wheel controllers (such as those available on commercial synthesizers).

  2. Implementation of the live electronics on the computer. In this stage the original analog sound processing modules were modeled in the computer using the CLM programming language. Some new capabilities were incorporated into the original model, such as dc-blocking filters and low-level controls (a minimal sketch of this signal path appears after this list).

  3. Interface design. Based on the results of stage 1, the interface chosen for the frequency control of the ring modulators was a MIDI keyboard synthesizer, the Yamaha SY77. This synthesizer allows multidimensional control of parameters through its keyboard and controllers, which can easily be mapped to the computer via MIDI. In the computer, the controllers are scaled into the proper ranges by software; some are used for coarse frequency changes (e.g., the modulation wheel) and others for fine microtonal adjustments (e.g., the dial). The keyboard note-on information is translated into tempered frequency values, and velocity is mapped to portamento timing between frequencies, introducing an expressive dimension to the modulation changes.

  4. Final software prototype design. The final prototype was implemented on an SGI computer running Common Lisp Music under Allegro Common Lisp 4.3. The program integrates a MIDI processing module (from stage 3) and a sound processing module that performs filtering and ring modulation (from stage 2) in two parallel channels, one for each piano. All controllers available on the SY77 are accessible from the computer and can be mapped to any control parameter of the algorithms, allowing a flexible interface design that can differ for each pianist. In fact, during rehearsals of the piece (performed by Tom Schultz and Joan Nagano at Stanford in 1998) we had to adjust controller ranges and even change controllers on the fly at the players' request, adapting the interfaces to their ergonomics as much as we could. (For instance, Ms. Nagano could not reach the modulation wheel of the synthesizer in time during an intricate passage; after trying different solutions, we set things up so that she played one of the front sliders, closer to the piano keyboard.)
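
The following is a minimal numpy sketch of the signal path described in stages 2 and 3, not the project's CLM code: tempered frequency from a MIDI note number, ring modulation of the piano signal, and a simple one-zero/one-pole dc-blocking filter (the exact filter form used in the project is an assumption here).

    import numpy as np

    def midi_to_hz(note):
        """Tempered frequency for a MIDI note number (A4 = 69 = 440 Hz)."""
        return 440.0 * 2.0 ** ((note - 69) / 12.0)

    def dc_block(x, r=0.995):
        """dc-blocking filter y[n] = x[n] - x[n-1] + r * y[n-1] (assumed form)."""
        y = np.zeros_like(x)
        prev_x = prev_y = 0.0
        for n in range(len(x)):
            y[n] = x[n] - prev_x + r * prev_y
            prev_x, prev_y = x[n], y[n]
        return y

    def ring_modulate(piano, mod_freq, sample_rate=44100):
        """Multiply the piano signal by a sine at the keyboard-chosen frequency."""
        t = np.arange(len(piano)) / sample_rate
        return dc_block(piano * np.sin(2 * np.pi * mod_freq * t))

    # A sine stands in for one channel of piano input; the modulation
    # frequency comes from a key press (here key 58, a B flat).
    piano = np.sin(2 * np.pi * 261.63 * np.arange(44100) / 44100)
    out = ring_modulate(piano, midi_to_hz(58))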

Conclusions: This computer implementation of Mantra not only opens the door to more performances of the piece without depending on its original analog gear (there are just a few working analog units, which can be rented from the composer's publisher), but also allows a new musical interpretation of the piece. The sound processing parameters are controlled from a homogeneous user interface that allows the pianists to ``play'' the modulation frequencies as notes on a keyboard and to use wheels and sliders for coarse and fine tuning. Taking advantage of the digital implementation of the sound processing modules, new features such as the dc-blocking filters were incorporated, yielding better sonic results. Using the MIDI protocol, new expressive subtleties were introduced, further expanding the musical interaction of the piece and integrating the sound processing controls with the piano gestures.

SynthBuilder--A Graphical SynthPatch Development Environment (May 1996)

Nick Porcaro and Pat Scandalis

SynthBuilder is a user-extensible, object-oriented, NEXTSTEP Music Kit application for interactive real-time design of synthesizer patches. Patches are represented by networks of digital signal processing elements called ``unit generators'' and MIDI event elements called ``note filters'' and ``note generators''. SynthBuilder is based on Eric Jordan's GraSP application, created at Princeton University in 1992, and the NeXT Draw example. The graphical interface enables construction of complex patches without having to write a single line of code, and the underlying Music Kit software provides support for real-time DSP synthesis and MIDI. This means there is no ``compute, then listen'' cycle to slow down the process of developing a patch. It can be tried out immediately on a MIDI keyboard, and unit generator and note filter parameters can be adjusted in real time while a note is still sounding. Sixteen-bit stereo sound is synthesized immediately on one or more DSP56001 signal processing chips, and can be controlled from the user interface with software-simulated or physical MIDI devices.

In addition to synthesis, the system supports configurations for sound processing via the DSP serial port which is also used for sound output to DACs and other digital I/O devices. MIDI messages can be mapped to unit generator object control methods, permitting high-level control of patch parameters. For example, a MIDI key number can be readily mapped into frequency, and then mapped into a delay line length via a graphically constructed lookup table. A single MIDI event can be fed to (or through) multiple note filters, each of which can modify the event stream and/or control one or more unit generator parameters. Polyphony is handled in SynthBuilder by graphically specifying a voice allocation scheme. Optionally, a Music Kit SynthPatch can be generated (in high-level source-code form) and used in another application. Dynamically loadable custom ``inspectors'' (user interfaces) can be created for patch elements. Dynamic loading promotes easy distribution and sharing of inspector modules, and promotes a fast, efficient development cycle. The process of creating a custom inspector is facilitated by a default-inspector-generator which takes a DSP assembly macro and a signal-flow/parameter list specification as input, and creates working interface code which can then be customized.
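
As an illustration of the mapping chain just mentioned (key number to frequency to delay-line length), here is a short sketch of how such a lookup table might be built in code; SynthBuilder constructs this graphically, and the names here are illustrative:

    import numpy as np

    SAMPLE_RATE = 44100

    def key_to_freq(key):
        """Equal-tempered frequency for a MIDI key number (A4 = 69)."""
        return 440.0 * 2.0 ** ((key - 69) / 12.0)

    # Lookup table: delay-line length in samples for each MIDI key, as used
    # when a string model's pitch is set by the length of its delay line.
    delay_table = np.array([SAMPLE_RATE / key_to_freq(k) for k in range(128)])

    def on_note_on(key):
        """Return the delay length to install when a key is pressed."""
        return int(round(delay_table[key]))

    print(on_note_on(69))   # 100 samples, the nearest integer length to A4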

As of this writing, SynthBuilder has more than 50 graphical custom inspectors, including an envelope editor, digital filter response curves, and a MIDI lookup table. SynthBuilder is currently being used by researchers at CCRMA to explore new synthesis techniques. SynthBuilder is now in the alpha release stage on both NeXT and Intel Pentium systems. Supported DSP cards for Intel systems include the Ariel PC56D, the Turtle Beach Multisound or Monterrey, and the i*link i56.

Franken Hardware: On Scalability for Real-Time Software Synthesis and Audio Processing (May 1996)

Bill Putnam and Timothy Stilson

The continuing rise in processor speed in today's computers makes software synthesis ever more viable, even for real-time applications. Music, however, tends to contain high levels of polyphony and complexity, and can easily exceed the ability of any single processor to keep up with real time. This problem is expected to persist for at least a few more generations of processors, simply because of the sheer complexity of current projects. Therefore some sort of parallel processing is necessary.

This project started with the design and construction of the Frankenstein Box Multiple-DSP Processing Engine for MusicKit. The Frankenstein Box consists of 8 EVM56002 evaluation modules (chosen for their low cost and compatibility with the current MusicKit architecture), along with glue logic and sound hardware.

The project continues with the specification of the Franken-II system, which places all 8 56002 chips on a single PCI card and improves the inter-DSP communication and audio routing.

As the MusicKit and other real-time software synthesis systems at CCRMA move to using general-purpose microprocessors for calculation, this project will try to address concerns relating to the ability to easily scale the systems beyond single processors. Primary considerations are: (1) the portability of code between main processors and peripheral processors, which, for economic and other reasons, are often of a different type than the main processor; (2) the ability to communicate easily between processors and to move processing tasks between processors in as transparent a manner as possible; and (3) the ease of further scaling to any level. These considerations affect the design of the system at many levels, from the design of the add-on processor systems up to the architecture of the software synthesis system itself.

Graphical Additive Synthesis (February 1999)

Craig Stuart Sapp

A command-line program, line2sine, was written to interpret graphic lines in a CAD-like drawing program as sine waves. Documents created by the NEXTSTEP program Diagram.app are read by the line2sine program, and any lines in that document are converted into frequency and amplitude envelopes which are then fed into oscillator unit generators. The line2sine program can be downloaded from ftp://ftp.peanuts.org/NEXTSTEP/audio/programs/line2sine.1.0.NI.bs.tar.gz or http://www.peak.org/next/apps/LighthouseDesign/Diagram/line2sine.1.0.NI.bs.tar.gz. These two files contain the program, documentation, and examples. On-line documentation as well as example conversions between graphics and sound can be found at http://hummer.stanford.edu/sig/doc/examples/line2sine.
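
The core idea can be sketched in a few lines of numpy, purely as an illustration (line2sine itself is written against Diagram.app's file format, which is not modeled here): a drawn line becomes breakpoints for frequency and amplitude envelopes that drive a sine oscillator.

    import numpy as np

    def render_line(t0, t1, f0, f1, a0, a1, sample_rate=44100):
        """Render one drawn 'line': glide frequency from f0 to f1 and
        amplitude from a0 to a1 over the time span t0..t1 (in seconds)."""
        n = int((t1 - t0) * sample_rate)
        freq = np.linspace(f0, f1, n)                # frequency envelope
        amp = np.linspace(a0, a1, n)                 # amplitude envelope
        phase = np.cumsum(2 * np.pi * freq / sample_rate)
        return amp * np.sin(phase)                   # the oscillator

    # A line sloping upward reads as a two-second glissando from 220 to
    # 440 Hz that fades from 0.8 to 0.2.
    y = render_line(0.0, 2.0, 220.0, 440.0, 0.8, 0.2)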

Rapid Prototyping for DSP, Sound Synthesis, and Effects (May 1996)

Julius Smith

The nature of computer support for digital signal processing (DSP) research is an ongoing issue. Initially, the Fortran programming language was the standard ``high level'' representation, and hardware and horizontal microcode served as ``low level'' representations for mass-produced products. While special-purpose hardware (e.g., ASICs) and DSP microcode continue to thrive, still giving the lowest asymptotic cost in mass production, the higher-level tools have changed considerably: Fortran is all but obsolete in favor of C, and C is rapidly giving way to its object-oriented extension, C++. For faster research prototyping at the expense of slower execution, interactive programming environments such as MatLab are being used in place of classical software development. These programming environments offer extensive display capabilities and a high-level, interpreted language with easy-to-use syntactic support for common signal processing operations in both the time and frequency domains. At a still higher level of abstraction, development tools supporting the direct manipulation of block diagrams are becoming more common. Examples include SimuLink (MatLab), LabView (National Instruments), Ptolemy and Gabriel (Berkeley), Max and TurboSynth (Opcode), SynthKit (Korg R&D), ComDisco, Star, and other CAD tools related to signal processing.

In a well-designed rapid prototyping system, it is possible to work at all levels in a variety of alternative representations such as block diagrams, MatLab, object-oriented C, or assembly language.

In typical music synthesis and audio signal processing applications, it is not necessary to sacrifice more than a few percent of theoretical maximum DSP performance, in terms of both speed and code size, in return for the use of a high-level, block-diagram-oriented development tool. This is because a small number of primitive modules can implement the vast majority of existing synthesis and processing techniques, and those modules account for the vast majority of the computational expense. They can be fully optimized in advance, so that simple drag-and-drop programming can provide both a real-time simulation and well-structured code generation that come very close to the efficiency of a special-purpose, hand-coded DSP assembly language program. As a result, block-diagram-based programming tools are fundamental to good signal processing support in music synthesis and digital audio development systems.
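
A toy sketch of this argument, with illustrative module names: a handful of pre-optimized primitives (oscillator, envelope, mixer) suffice to express a patch, so a block-diagram tool only needs to wire such modules together.

    import numpy as np

    SR = 44100

    def osc(freq, n):        # primitive 1: sine oscillator
        return np.sin(2 * np.pi * freq * np.arange(n) / SR)

    def env(points, n):      # primitive 2: breakpoint amplitude envelope
        times, values = zip(*points)
        return np.interp(np.linspace(0.0, 1.0, n), times, values)

    def mix(*signals):       # primitive 3: summing mixer
        return np.sum(signals, axis=0)

    # A "patch" is then just a wiring of primitives -- the textual analogue
    # of dragging and connecting blocks in a graphical editor.
    n = 2 * SR
    patch = mix(env([(0.0, 0.0), (0.1, 1.0), (1.0, 0.0)], n) * osc(440.0, n),
                env([(0.0, 0.0), (0.5, 0.5), (1.0, 0.0)], n) * osc(660.0, n))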

For rapid research prototyping in music and audio applications, there remains an unfulfilled need for a full-featured, available, open, and well structured development system supporting MIDI and digital audio synthesis and signal processing. CCRMA is presently supporting in part the development of SynthBuilder, a block-diagram based rapid prototyping tool for these purposes. SynthBuilder leverages very heavily off of the advanced capabilities of the Music Kit and NEXTSTEP.

Tactile Manipulation of Software (January 1998)

Sean Varah

This project extends existing software at CCRMA to incorporate tactile manipulation. My work at the Harvard Computer Music Center involved adapting computer music software to emulate analog studio techniques. I plan to adapt digital signal processing programs to accept MIDI or other external controller information to change program parameters. For example, an on-screen digital filtering program would have its frequencies, bandwidth, and attenuation set by MIDI sliders, so a composer could manipulate parameters in a tactile way, emulating analog graphic equalizers. With external controllers set up, the composer would be able to manipulate several parameters at once, as opposed to typing single parameters or adjusting one parameter at a time with the mouse. I then plan to use this type of interactive control in live performance.
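
A hedged sketch of the idea, with illustrative controller numbers and ranges: incoming MIDI control-change messages set filter parameters that would otherwise be typed in or mouse-adjusted one at a time.

    def scale(value, lo, hi):
        """Map a 0-127 MIDI controller value into the range [lo, hi]."""
        return lo + (hi - lo) * value / 127.0

    # controller number -> (parameter name, low, high); all hypothetical
    cc_map = {
        20: ("center_freq_hz", 40.0, 8000.0),
        21: ("bandwidth_hz", 10.0, 2000.0),
        22: ("attenuation_db", -48.0, 0.0),
    }

    filter_params = {"center_freq_hz": 1000.0, "bandwidth_hz": 200.0,
                     "attenuation_db": -6.0}

    def on_control_change(cc_number, cc_value):
        """Update one filter parameter from a MIDI slider move."""
        if cc_number in cc_map:
            name, lo, hi = cc_map[cc_number]
            filter_params[name] = scale(cc_value, lo, hi)

    on_control_change(20, 64)   # slider halfway -> ~4 kHz center frequency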

