Searching for the GRAIL (Winter 2016)
Computer Music Journal (CMJ), Winter 2016 issue
This article describes a quest for the GRAIL (Giant Radial Array for Immersive Listening), a large-scale loudspeaker system with related hardware and software control equipment. The GRAIL was developed at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, evolving from the need for optimal sound quality in our multichannel concerts. It is also used for teaching and research.
The current GRAIL is one step in an ongoing evolutionary process, characterized by the use of off-the-shelf hardware components and custom software based on free software languages and libraries. While developing our software, we have aimed, as much as possible, to take advantage of existing programs and utilities.
2016 AES International Conference on Sound Field Control (AES SFC 2016), Guildford, England
This paper describes the *SpHEAR (Spherical Harmonics Ear) project, an evolving family of 3D-printed, GNU General Public License / Creative Commons licensed soundfield microphone designs. The microphone assembly is 3D printed as separate parts, one for each capsule holder plus a microphone mount. The capsule holders interlock like a 3D puzzle to form the microphone assembly. This strategy yields parts that can be printed flat and without overhangs, so they can be produced on low- to medium-priced 3D printers that use fused-filament fabrication technology. The 3D models currently include the TinySpHEAR, a four-capsule tetrahedral microphone; the Octathingy, an eight-capsule design; and the BigSpHEAR 12- and 20-capsule proof-of-concept platonic-solid models. The models are written in OpenSCAD and are completely parametric. The project also includes suggested designs for the capsule interface electronics and preliminary calibration software written in Octave.
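For a tetrahedral design like the TinySpHEAR, the core of the signal processing (separate from the Octave calibration software mentioned above) is the textbook A-format to B-format conversion. The sketch below shows that matrix for one of the two standard capsule orientations; gain normalization conventions vary, and a real microphone additionally needs the per-capsule calibration filters:

```python
import numpy as np

# Textbook first-order A-format -> B-format matrix for a tetrahedral mic,
# capsule order FLU, FRD, BLD, BRU (front-left-up, front-right-down,
# back-left-down, back-right-up). The 0.5 gain is one common convention.
A2B = 0.5 * np.array([
    [1,  1,  1,  1],   # W (omnidirectional)
    [1,  1, -1, -1],   # X (front-back)
    [1, -1,  1, -1],   # Y (left-right)
    [1, -1, -1,  1],   # Z (up-down)
])

# A-format: four capsule signals (here just noise as a stand-in)
capsules = np.random.default_rng(0).standard_normal((4, 1024))
wxyz = A2B @ capsules  # B-format: W, X, Y, Z channels
```

Feeding an identical signal to all four capsules lands entirely in W, with X, Y, and Z cancelling, which is a quick sanity check of the matrix signs.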
[talk] High Order Ambisonics, generating and diffusing full surround 3D soundfields, October 2015
Slides for the lecture at Electronic Music Week 2015, Shanghai, China, October 21, 2015
An Update on the Development of OpenMixer (April 2015)
Elliot Kermit-Canfield, Fernando Lopez-Lezcano
Linux Audio Conference 2015 (LAC2015), Johannes Gutenberg University (JGU) Mainz, Germany
This paper serves to update the community on the development of OpenMixer, an open-source multichannel routing platform designed for CCRMA. Serving as the replacement for a digital mixer, OpenMixer provides routing control, Ambisonics decoders, digital room correction, and level metering for the CCRMA Listening Room's 22.4-channel, full-sphere speaker array.
An Architecture for Reverberation in High Order Ambisonics (October 2014)
137th International AES Convention (137th AES), Los Angeles, USA
This paper describes a reverberation architecture implemented within the signal chain of a periphonic HOA (High Order Ambisonics) audio stream. An HOA signal (3rd order in the example implementations) representing the dry source signal is decoded into an array of virtual sources uniformly distributed within the reverberant space being simulated. These virtual sources are convolved with independent, decorrelated impulse responses, optionally tailored to model spatial variations of the simulated reverberation. The output of each convolver is then encoded back into HOA and mixed with the original Ambisonics dry signal. The result is a convolution reverberation engine that takes an HOA input, produces an HOA output, and maintains the spatial characteristics of the input signal.
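The decode / convolve / re-encode chain described above can be sketched in a few lines. This is a deliberately reduced illustration, not the paper's implementation: 2D first order instead of periphonic 3rd order, a simple projection decoder, and synthetic decaying-noise bursts standing in for measured impulse responses:

```python
import numpy as np

rng = np.random.default_rng(0)
n_virt = 8                               # virtual sources on a circle
theta = 2 * np.pi * np.arange(n_virt) / n_virt

def encode(sig, az):
    """2D first-order encoding of a mono signal at azimuth az -> [W, X, Y]."""
    return np.stack([sig, sig * np.cos(az), sig * np.sin(az)])

def decode(b):
    """Simple projection (sampling) decoder to the virtual source directions."""
    w, x, y = b
    return np.stack([(w + 2 * (x * np.cos(a) + y * np.sin(a))) / n_virt
                     for a in theta])

dry = encode(rng.standard_normal(4800), np.deg2rad(30))   # dry HOA stream

# decorrelated, exponentially decaying noise bursts stand in for real IRs
n = np.arange(2400)
irs = rng.standard_normal((n_virt, 2400)) * np.exp(-n / 600)

virt = decode(dry)                                        # HOA -> virtual sources
wet_feeds = np.stack([np.convolve(v, h) for v, h in zip(virt, irs)])
wet = sum(encode(f, az) for f, az in zip(wet_feeds, theta))  # back to HOA
dry_padded = np.pad(dry, ((0, 0), (0, wet.shape[1] - dry.shape[1])))
out = dry_padded + 0.3 * wet                              # dry + wet mix
```

The key property survives even in this toy version: `out` is still an Ambisonics signal, so any decoder downstream renders both the dry source and its reverberant field.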
Towards Open 3D Sound Diffusion Systems (September 2014)
ICMC / SMC 2014 (ICMC/SMC2014), Athens, Greece
This paper describes the rationale, design and implementation of general purpose sound diffusion systems running on commodity PC hardware, and using open source free software components with minimal additional programming. These systems are highly configurable and powerful, can be extended as needed, and are a good fit for research environments as they can adapt to changing needs and experimentation in new technologies. This paper describes two examples: the system we have been using and extending for the past few years for sound diffusion in concert, and the system running permanently in our Listening Room at CCRMA, Stanford University.
[talk] In Situ Evaluation of Surround Sound System Performance, ASA Conference, December 2013
Eric Benjamin (CMAP), Aaron Heller (SRI), Fernando Lopez-Lezcano, J Acoust Soc Am. 2013 Nov;134(5):4185. doi: 10.1121/1.4831347
Surround sound systems are produced with the intention of reproducing the spatial aspects of sound, such as localization and envelopment. As part of his work on Ambisonics, Gerzon developed two metrics, the velocity and energy localization vectors, which are intended to predict the localization performance of a system. These are used during the design process to optimize the decoder that supplies signals to the loudspeaker array. At best, subjective listening tests are conducted on the finished system, but no objective assessments of the spatial qualities are made to verify that the realized performance correlates with the predictions. In the present work, binaural recordings were made of a 3-D 24-loudspeaker installation at Stanford's Bing Studio. Test signals were used to acquire the binaural impulse response of each loudspeaker in the array and of Ambisonic reproduction using the loudspeaker array. The measurements were repeated at several locations within the hall. Subsequent analysis calculated the ITDs and ILDs for all cases. Initial results from the analysis for the center listening position show ITDs that correspond very closely to what is expected in natural hearing, and ILDs that are similar to natural hearing.
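One common way to extract an ITD from a pair of binaural impulse responses like those measured above is the lag of the cross-correlation peak. The sketch below illustrates that estimator on toy impulse responses (it is not the analysis code used in the talk):

```python
import numpy as np

def itd_from_binaural_ir(ir_l, ir_r, fs):
    """Estimate the ITD (seconds) as the lag of the cross-correlation peak.

    With np.correlate(a, v), a positive lag means the first argument is
    delayed relative to the second; here the left ear leads, so the lag
    comes out negative.
    """
    xc = np.correlate(ir_l, ir_r, mode="full")
    lag = np.argmax(np.abs(xc)) - (len(ir_r) - 1)
    return lag / fs

# toy binaural IRs: the right ear arrives 24 samples (0.5 ms) later
fs = 48000
h = np.zeros(256)
h[10] = 1.0
h_l = h
h_r = np.roll(h, 24)
itd = itd_from_binaural_ir(h_l, h_r, fs)   # -> -0.0005 s (left ear leads)
```

Real measured responses would first be windowed around the direct sound and possibly band-limited, since reflections and low-frequency energy bias the raw correlation peak.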
[talk] Towards Open 3D Multichannel Sound Diffusion Systems, October 2013
The PDF with the slides above represents a much longer planned talk; the time available, with sequential interpretation into Chinese, was much shorter than scheduled, so many of the details and slides were skipped in the actual talk.
Fernando Lopez-Lezcano, Travis Skare, Michael J. Wilson, Jonathan S. Abel
Linux Audio Conference 2013 (LAC2013), IEM, Graz, Austria
A Linux-based system for live auralization is described, and its use in recreating the reverberant acoustics of Hagia Sophia, Istanbul, for a Byzantine chant concert in the recently inaugurated Bing Concert Hall is detailed. The system employed 24 QSC full range loudspeakers and six subwoofers strategically placed about the hall, and used Countryman B2D hypercardioid microphones affixed to the singers' heads to provide dry, individual vocal signals. The vocals were processed by a custom-built Linux-based computer running Ardour2, jconvolver, jack, jack-mamba, SuperCollider and Ambisonics plugins and decoders among other free software to generate loudspeaker signals that, when imprinted with the acoustics of Bing, provided the wet portion of the Hagia Sophia simulation.
From Jack to UDP packets to sound, and back (April 2012)
Linux Audio Conference 2012 (LAC2012), CCRMA, Stanford University, USA
The Mamba Digital Snakes are commercial products created by Network Sound that are used in pairs to replace costly analog cable snakes with a single Ethernet cable. A pair of boxes can send and receive up to 64 channels at a 48 kHz sampling rate, packed as 24-bit samples. This paper describes the evolution of jack-mamba, a small jack client that sends and receives UDP packets to/from the box through a network interface, turning it into a high-channel-count soundcard.
Article: "A Very Fractal Cat: of Cats, performers, composers and programmers", an article published in eContact! 13.2
A description of what makes the Cat pieces tick (or is it miaou?)...
Sound and Music Conference (SMC2010), Barcelona, Spain
In this paper I describe the genesis and evolution of a series of live pieces for a classically trained pianist, keyboard controller and computer that include sound generation and processing, event processing, and algorithmic control and generation of low- and high-level structures of the performance. The pieces are based on live and sampled piano sounds, further processed with granular and spectral techniques and merged with simple additive synthesis. Spatial processing is performed using third order Ambisonics encoding and decoding.
Fernando Lopez-Lezcano, Jason Sadural
Linux Audio Conference (LAC2010), Utrecht, The Netherlands
The Listening Room at CCRMA, Stanford University is a 3D studio with 16 speakers (4 hanging from the ceiling, 8 surrounding the listening area at ear level, and 4 more below an acoustically transparent grid floor). We found that a standard commercial digital mixer was not the best interface for using the studio: digital mixers are complex, have opaque interfaces, and are usually geared towards mixdown to stereo rather than efficiently routing many input and output channels. We have replaced the mixer with a dedicated computer running Openmixer, an open-source custom program designed to mix and route many input channels into the multichannel speaker array available in the Listening Room. This paper describes Openmixer, its motivations, current status, and planned future development.
Linux Audio Conference (LAC2009), Parma, Italy
When The Knoll, the building that houses CCRMA, was completely renovated in 2004, new custom workstations were designed and built with the goal of being both fast machines and completely noiseless to match the architectural and acoustical design of the building.
Surf to the LAC2009 page for the paper presentation.
Sound and Music Computing Conference 2008 (SMC2008), Berlin, Germany
Dlocsig is a dynamic spatial locator unit generator written for the Common Lisp Music (CLM) sound synthesis and processing language. Dlocsig was first created in 1992 as a four-channel 2D dynamic locator, and since then it has evolved into a full 3D system for an arbitrary number of speakers that can render moving sound objects through amplitude panning (VBAP) or Ambisonics. This paper describes the motivations for the project, its evolution over time, and the details of its software implementation and user interface.
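The amplitude-panning mode mentioned above is based on VBAP, whose 2D core reduces to a small linear solve per speaker pair: find the gains that combine the two speaker unit vectors into the source direction, then power-normalize. This is a sketch of that idea, not dlocsig's actual code:

```python
import numpy as np

def vbap_pair_gains(src_az, spk1_az, spk2_az):
    """2D VBAP gains for one speaker pair: solve g1*l1 + g2*l2 = p."""
    u = lambda a: np.array([np.cos(np.deg2rad(a)), np.sin(np.deg2rad(a))])
    L = np.column_stack([u(spk1_az), u(spk2_az)])  # speaker unit vectors
    g = np.linalg.solve(L, u(src_az))              # raw panning gains
    return g / np.linalg.norm(g)                   # power normalization

# a centered source between a standard +/-30 degree stereo pair
g = vbap_pair_gains(0, -30, 30)    # -> equal gains of about 0.707 each
```

A full renderer repeats this for whichever adjacent pair (or, in 3D, speaker triplet with a 3x3 solve) brackets the moving source, crossfading as the trajectory passes from one pair to the next.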
CCRMA Studio Report, 2007
Fernando Lopez-Lezcano, Carr Wilkerson
International Computer Music Conference 2007 (ICMC2007), Copenhagen, Denmark
[talk] Planet CCRMA, an Environment for Open Source Computing in Music Research and Production, 2006
Keynote talk at the International Workshop on Computer Music and Audio Technology (WOCMAT), Taipei, Taiwan, March 2006
International Computer Music Conference 2005 (ICMC2005), Barcelona, Spain
Planet CCRMA at Home is a collection of packages that you can add to a computer running RedHat 9 or Fedora Core 1, 2 or 3 to transform it into an audio workstation with a low-latency kernel, current ALSA audio drivers, and a nice set of music, MIDI, audio, and video applications. This presentation outlines the changes that have happened in the Planet over the past two years (since the previous presentation at LAC2003).
[talk] Soft Landing on Planet CCRMA, October 2003
[talk] Planet CCRMA, July 2003
Talk at the BYOL, Bring Your Own Laptop Free Software for Music Workshop, July 4 2003, Prato, Italy
International Computer Music Conference 2002 (ICMC2002), Göteborg, Sweden
In this paper I present an overview of Planet CCRMA, a freely available collection of Open Source sound and music packages that reflects the Linux software environment I currently maintain at CCRMA. It is built on top of RedHat 7.2 and is easy to install and upgrade. Land on Planet CCRMA at http://ccrma.stanford.edu/planetccrma/software/.
CCRMA Studio Report, 2001
International Computer Music Conference 2001 (ICMC2001), La Habana, Cuba
CCRMA Studio Report, 1999
International Computer Music Conference 1999 (ICMC1999), Beijing, China
CCRMA Studio Report, 1998
International Computer Music Conference 1998 (ICMC1998), University of Michigan, Ann Arbor, USA
CCRMA Studio Report, 1997
International Computer Music Conference 1997 (ICMC1997), Thessaloniki, Greece
International Computer Music Conference 1996 (ICMC1996), Hong Kong University of Science and Technology, China
This paper will describe the current implementation of PadMaster, a real-time improvisation environment running under the NextStep operating system on both NeXT hardware and Intel PCs. The system was designed with the Mathews/Boie Radio Drum in mind, but can now use alternative controllers, including widely available graphics tablets. The current version adds soundfile playback and algorithms to the preexisting palette of performance options.
This paper was also presented at the SBCM III Conference (Third Brazilian Symposium on Computer Music), Recife, Brazil.
International Computer Music Conference 1995 (ICMC1995), Banff Centre for the Arts, Canada
This paper will describe the design and implementation of PadMaster, a real-time improvisation environment running under the NextStep operating system. The system currently uses the Mathews/Boie Radio Drum as a three dimensional controller for interaction with the performer.
A more detailed version of the same paper was presented at the SBCM II Conference (Second Brazilian Symposium on Computer Music) in Canela, Brazil.
International Computer Music Conference 1994 (ICMC1994), DIEM, Danish Institute of Electroacoustic Music, Denmark
CCRMA Studio Report, 1994
International Computer Music Conference 1994 (ICMC1994), DIEM, Danish Institute of Electroacoustic Music, Denmark