Aaron Heller, Eric Benjamin, Fernando Lopez-Lezcano
150th AES Convention (Global Resonance), May 25-28, 2021
In this paper we discuss the motivation, design, and analysis of ambisonic decoders for systems where the vertical order is less than the horizontal order, known as mixed-order Ambisonic systems. This can be due to the use of microphone arrays that emphasize horizontal spatial resolution or speaker arrays that provide sparser coverage vertically. First, we review Ambisonic reproduction criteria, as defined by Gerzon, and summarize recent results on the relative perceptual importance of the various criteria. Then we show that using full-order decoders with mixed-order program material results in poorer performance than with a properly designed mixed-order decoder. We then introduce a new implementation of a decoder optimizer that draws upon techniques from machine learning for quick and robust convergence, discuss the construction of the objective function, and apply it to the problem of designing two-band decoders for mixed-order signal sets and non-uniform loudspeaker layouts. Results of informal listening tests are summarized and future directions discussed.
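The objective function for such a decoder optimizer is typically built from Gerzon's localization criteria. As a rough illustration (not the paper's actual optimizer, and with invented names), the velocity (rV) and energy (rE) localization vectors for one panned source can be computed from the speaker gains and unit direction vectors like this:

```python
import numpy as np

def localization_vectors(gains, speaker_dirs):
    """Gerzon velocity (rV) and energy (rE) localization vectors.

    gains        : (N,) real speaker gains for one virtual source
    speaker_dirs : (N, 3) unit vectors toward each loudspeaker
    """
    g = np.asarray(gains, dtype=float)
    u = np.asarray(speaker_dirs, dtype=float)
    rV = (g @ u) / g.sum()              # pressure-weighted direction
    rE = ((g ** 2) @ u) / (g ** 2).sum()  # energy-weighted direction
    return rV, rE

# Toy check: a square horizontal layout, source panned toward +x.
dirs = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]])
gains = np.array([1.0, 0.5, 0.0, 0.5])
rV, rE = localization_vectors(gains, dirs)
# |rE| < 1 here; an optimizer would adjust the decoder matrix to push
# |rE| toward 1 (and rV toward the source) over many source directions.
```

An optimizer of the kind described would evaluate these vectors over a grid of source directions and adjust the decoder matrix, per frequency band, so that rV and rE point at the source and their magnitudes approach the ideal values.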
Fernando Lopez-Lezcano
EAA Spatial Audio Signal Processing Symposium, (EAA 2019), Paris, France
This paper presents an update on the *SpHEAR (Spherical Harmonics Ear) project, created with the goal of using low-cost 3D printers to fabricate Ambisonics microphones. The project includes all mechanical 3D models and electrical designs, as well as all the procedures and software needed to calibrate the microphones. Everything is shared through GPL/CC licenses and is available in a public GIT repository. We will focus on the status of the eight-capsule, second-order OctaSpHEAR microphone, with details of the evolution of its mechanical design and calibration.
Fernando Lopez-Lezcano, Christopher Jette
Proceedings of the 17th Linux Audio Conference (LAC2019), CCRMA, Stanford University
The Stage, a small concert hall at CCRMA, Stanford University, was designed as a multi-purpose space when The Knoll, the building that houses CCRMA, was renovated in 2003/5. It is used for concerts, installations, classes and lectures, and as such it needs to be always available and accessible. Its support for sound diffusion evolved from an original array of 8 speakers in 2005, to 16 speakers in a 3D configuration in 2011, with several changes in speaker placement over the years that optimized the ability to diffuse pieces in full 3D surround. This paper describes the evolution of the design and a significant upgrade in 2017 that made it capable of rendering HOA (High Order Ambisonics) of up to 5th or 6th order, without changing the ease of operation of the existing design for classes and lectures, and making it easy for composers and concert presenters to work with both the HOA and legacy 16 channel systems.
Fernando Lopez-Lezcano
2018 AES Conference on Audio for Virtual and Augmented Reality (AES AVAR 2018), Redmond, Washington, USA
This paper is an update on the *SpHEAR (Spherical Harmonics Ear) project, created with the goal of using low-cost 3D printers to fabricate Ambisonics microphones. The initial four-capsule prototypes reported in 2016 have evolved into a family of full-featured high quality microphones that include the traditional tetrahedral design and a more advanced eight-capsule microphone that can capture second-order soundfields. The project includes all mechanical 3D models and electrical designs, as well as all the procedures and software needed to calibrate the microphones for best performance. A fully-automated robotic arm measurement rig is also described. Everything in the project is shared through GPL/CC licenses, uses Free Software components, and is available on a public GIT repository (https://cm-gitlab.stanford.edu/ambisonics/SpHEAR/).
The SpHEAR Project GIT repository
Fernando Lopez-Lezcano
Winter 2016 issue of the Computer Music Journal (CMJ)
This article describes a quest for the GRAIL (Giant Radial Array for Immersive Listening), a large-scale loudspeaker system with related hardware and software control equipment. The GRAIL was developed at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, evolving from the need for optimal sound quality in our multichannel concerts. It is also used for teaching and research.
The current GRAIL is one step in an ongoing evolutionary process, characterized by the use of off-the-shelf hardware components and custom software based on free software languages and libraries. While developing our software, we have, as much as possible, aimed to take advantage of existing programs and utilities.
Fernando Lopez-Lezcano
2016 AES International Conference on Sound Field Control (AES SFC 2016), Guildford, England
This paper describes the *SpHEAR (Spherical Harmonics Ear) project, an evolving family of 3D printed, GNU General Public License / Creative Commons licensed soundfield microphone designs. The microphone assembly is 3D printed as separate parts, one for each capsule holder plus a microphone mount. The capsule holders interlock like a 3D puzzle to form the microphone assembly. This strategy was chosen so the parts can be printed flat and without overhangs, allowing them to be made on low- to medium-priced 3D printers that use fused-filament fabrication technology. The 3D models currently include the TinySpHEAR, a four-capsule tetrahedral microphone, the Octathingy, an eight-capsule design, and the BigSpHEAR 12- and 20-capsule proof-of-concept platonic-solid models. The models are written in OpenSCAD and are completely parametric. The project also includes suggested designs for the capsule interface electronics and preliminary calibration software written in Octave.
The SpHEAR Project GIT repository
Slides for the lecture at the Electronic Music Week 2015, Shanghai, China, October 21 2015
Elliot Kermit-Canfield, Fernando Lopez-Lezcano
Linux Audio Conference 2015 (LAC2015), Johannes Gutenberg University (JGU) Mainz, Germany
This paper serves to update the community on the development of OpenMixer, an open source, multichannel routing platform designed for CCRMA. Serving as the replacement for a digital mixer, OpenMixer provides routing control, Ambisonics decoders, digital room correction, and level metering to the CCRMA Listening Room's 22.4 channel, full sphere speaker array.
Fernando Lopez-Lezcano
137th International AES Convention (137th AES), Los Angeles, USA
This paper describes a reverberation architecture implemented within the signal chain of a periphonic HOA (High Order Ambisonics) audio stream. A HOA signal (3rd order in the example implementations) representing the dry source signal is decoded into an array of virtual sources uniformly distributed within the reverberant space being simulated. These virtual sources are convolved with independent, decorrelated impulse responses, optionally tailored to model spatial variations of the simulated reverberation. The output of each convolver is then encoded back into HOA and mixed with the original Ambisonics dry signal. The result is a convolution reverberation engine with a HOA input that outputs HOA and maintains the spatial characteristics of the input signal.
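The signal chain described above can be sketched compactly; this is a minimal illustration rather than the paper's implementation, and it assumes the decode/encode matrices and the per-source decorrelated impulse responses are supplied by the caller:

```python
import numpy as np

def hoa_convolution_reverb(dry_hoa, decode, encode, irs):
    """HOA-in/HOA-out convolution reverb sketch.

    dry_hoa : (C, T) ambisonic signal, C = (order + 1)**2 channels
    decode  : (M, C) matrix decoding to M virtual sources on a sphere
    encode  : (C, M) matrix re-encoding each virtual source direction
    irs     : (M, L) decorrelated room impulse responses, one per source
    """
    virtual = decode @ dry_hoa                       # M virtual-source feeds
    wet = np.stack([np.convolve(virtual[m], irs[m])  # independent convolutions
                    for m in range(decode.shape[0])])
    out = encode @ wet                               # back into HOA
    out[:, :dry_hoa.shape[1]] += dry_hoa             # mix dry with wet tail
    return out                                       # shape (C, T + L - 1)
```

Because both the input and the output are HOA streams, the block can be dropped anywhere into an ambisonic signal chain ahead of the final decoder.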
Fernando Lopez-Lezcano
ICMC / SMC 2014 (ICMC/SMC2014), Athens, Greece
This paper describes the rationale, design and implementation of general purpose sound diffusion systems running on commodity PC hardware, and using open source free software components with minimal additional programming. These systems are highly configurable and powerful, can be extended as needed, and are a good fit for research environments as they can adapt to changing needs and experimentation in new technologies. This paper describes two examples: the system we have been using and extending for the past few years for sound diffusion in concert, and the system running permanently in our Listening Room at CCRMA, Stanford University.
Eric Benjamin (CMAP), Aaron Heller (SRI), Fernando Lopez-Lezcano
J Acoust Soc Am. 2013 Nov;134(5):4185. doi: 10.1121/1.4831347
Surround sound systems are produced with the intention of reproducing the spatial aspects of sound, such as localization and envelopment. As part of his work on Ambisonics, Gerzon developed two metrics, the velocity and energy localization vectors, which are intended to predict the localization performance of a system. These are used during the design process to optimize the decoder that supplies signals to the loudspeaker array. At best, subjective listening tests are conducted on the finished system, but no objective assessments of the spatial qualities are made to verify that the realized performance correlates with the predictions. In the present work, binaural recordings were made of a 3-D 24-loudspeaker installation at Stanford's Bing Studio. Test signals were used to acquire the binaural impulse response of each loudspeaker in the array and of Ambisonic reproduction using the loudspeaker array. The measurements were repeated at several locations within the hall. Subsequent analysis calculated the ITDs and ILDs for all cases. Initial results from the analysis of the ITDs and ILDs for the center listening position show ITDs that correspond very closely to what is expected in natural hearing, and ILDs that are similar to natural hearing.
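The ITD/ILD extraction step can be illustrated with a simple cross-correlation estimate; this is a generic sketch, not the paper's analysis pipeline, and the helper names are invented:

```python
import numpy as np

def estimate_itd(left_ir, right_ir, fs):
    """ITD from the peak of the interaural cross-correlation.
    A positive result means the left-ear signal lags (source toward the right)."""
    xc = np.correlate(left_ir, right_ir, mode="full")
    lag = np.argmax(np.abs(xc)) - (len(right_ir) - 1)  # lag in samples
    return lag / fs                                    # seconds

def estimate_ild(left_ir, right_ir):
    """Broadband ILD in dB from the RMS levels of the two impulse responses."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(left_ir) / rms(right_ir))
```

In practice such estimates would be computed per critical band from the measured binaural impulse responses and compared against free-field values for the same source direction.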
Talk at the Electronic Music Week 2013, International Forum “Soundscapes – 3D Sound”, Shanghai, China, October 22, 2013
The PDF with the slides above represents a much longer planned talk; the time available with sequential interpretation into Chinese was much shorter than scheduled, so many of the details and slides were skipped in the actual talk.
Fernando Lopez-Lezcano, Travis Skare, Michael J. Wilson, Jonathan S. Abel
Linux Audio Conference 2013 (LAC2013), IEM, Graz, Austria
A Linux-based system for live auralization is described, and its use in recreating the reverberant acoustics of Hagia Sophia, Istanbul, for a Byzantine chant concert in the recently inaugurated Bing Concert Hall is detailed. The system employed 24 QSC full range loudspeakers and six subwoofers strategically placed about the hall, and used Countryman B2D hypercardioid microphones affixed to the singers' heads to provide dry, individual vocal signals. The vocals were processed by a custom-built Linux-based computer running Ardour2, jconvolver, jack, jack-mamba, SuperCollider and Ambisonics plugins and decoders among other free software to generate loudspeaker signals that, when imprinted with the acoustics of Bing, provided the wet portion of the Hagia Sophia simulation.
Fernando Lopez-Lezcano
Linux Audio Conference 2012 (LAC2012), CCRMA, Stanford University, USA
The Mamba Digital Snakes are commercial products created by Network Sound that are used in pairs to replace costly analog cable snakes with a single Ethernet cable. A pair of boxes can send and receive up to 64 channels at a 48 kHz sampling rate, packed with 24-bit samples. This paper describes the evolution of jack-mamba, a small JACK client that sends and receives UDP packets to/from the box through a network interface, transforming it into a high-channel-count soundcard.
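The channel count and sample format quoted above imply a payload rate that fits comfortably on a 100 Mbit Ethernet link. A quick back-of-the-envelope check, with a 24-bit packing helper whose byte layout is a guess for illustration only (the actual Mamba wire format may differ):

```python
# Payload rate for 64 channels of 24-bit audio at 48 kHz.
channels, rate, bytes_per_sample = 64, 48_000, 3
payload = channels * rate * bytes_per_sample   # bytes per second
print(payload, payload * 8 / 1e6)              # 9216000 B/s, 73.728 Mbit/s

def pack24(samples):
    """Pack signed integer samples into little-endian 24-bit bytes.
    Hypothetical layout, for illustration; not the Mamba wire format."""
    out = bytearray()
    for s in samples:
        out += (s & 0xFFFFFF).to_bytes(3, "little")  # two's-complement, 3 bytes
    return bytes(out)
```

At roughly 74 Mbit/s of audio payload plus packet overhead, a dedicated Fast Ethernet interface is close to saturated, which is consistent with using a point-to-point link rather than shared network traffic.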
A description of what makes the Cat pieces tick (or is it miaou?)...
Fernando Lopez-Lezcano
Sound and Music Conference (SMC2010), Barcelona, Spain
In this paper I describe the genesis and evolution of a series of live pieces for a classically trained pianist, keyboard controller and computer that include sound generation and processing, event processing and algorithmic control and generation of low and high level structures of the performance. The pieces are based on live and sampled piano sounds, further processed with granular and spectral techniques and merged with simple additive synthesis. Spatial processing is performed using third order Ambisonics encoding and decoding.
Fernando Lopez-Lezcano, Jason Sadural
Linux Audio Conference (LAC2010), Utrecht, The Netherlands
The Listening Room at CCRMA, Stanford University is a 3D studio with 16 speakers (4 hang from the ceiling, 8 surround the listening area at ear level and 4 more are below an acoustically transparent grid floor). We found that a standard commercial digital mixer was not the best interface for using the studio. Digital mixers are complex, have an opaque interface and they are usually geared towards mixdown to stereo instead of efficiently routing many input and output channels. We have replaced the mixer with a dedicated computer running Openmixer, an open source custom program designed to mix and route many input channels into the multichannel speaker array available in the Listening Room. This paper will describe Openmixer, its motivations, current status and future planned development.
Linux Audio Conference (LAC2009), Parma, Italy
When The Knoll, the building that houses CCRMA, was completely renovated in 2004, new custom workstations were designed and built with the goal of being both fast machines and completely noiseless to match the architectural and acoustical design of the building.
Surf to the LAC2009 page for the paper presentation.
Sound and Music Computing Conference 2008 (SMC2008), Berlin, Germany
Dlocsig is a dynamic spatial locator unit generator written for the Common Lisp Music (CLM) sound synthesis and processing language. Dlocsig was first created in 1992 as a four-channel 2D dynamic locator and has since evolved into a full 3D system for an arbitrary number of speakers that can render moving sound objects through amplitude panning (VBAP) or Ambisonics. This paper describes the motivations for the project, its evolution over time, and the details of its software implementation and user interface.
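One of the two rendering methods mentioned can be illustrated with the textbook 2-D VBAP gain computation for a pair of adjacent loudspeakers; this is a generic sketch, not dlocsig's Lisp implementation, and the function and parameter names are invented:

```python
import numpy as np

def vbap2d_gains(source_az_deg, spk_az_deg):
    """2-D VBAP gains for a source panned between two loudspeakers.

    source_az_deg : source azimuth in degrees
    spk_az_deg    : the two loudspeaker azimuths in degrees
    """
    p = np.array([np.cos(np.radians(source_az_deg)),
                  np.sin(np.radians(source_az_deg))])   # source unit vector
    L = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                  for a in spk_az_deg])                 # rows: speaker unit vectors
    g = np.linalg.solve(L.T, p)                         # solve p = L.T @ g
    return g / np.linalg.norm(g)                        # constant-power normalization
```

For a source at 0° between speakers at ±45° the gains come out equal, and as the source moves onto one speaker that speaker's gain goes to 1 while the other's goes to 0, which is the basic behavior a dynamic locator needs as a trajectory crosses speaker pairs.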
Fernando Lopez-Lezcano, Carr Wilkerson
International Computer Music Conference 2007 (ICMC2007), Copenhagen, Denmark
Keynote talk at the International Workshop on Computer Music and Audio Technology (WOCMAT), Taipei, Taiwan, March 2006
International Computer Music Conference 2005 (ICMC2005), Barcelona, Spain
Planet CCRMA at Home is a collection of packages that you can add to a computer running Red Hat 9 or Fedora Core 1, 2 or 3 to transform it into an audio workstation with a low-latency kernel, current ALSA audio drivers and a nice set of music, MIDI, audio and video applications. This presentation outlines the changes that have happened in the Planet over the past two years (since the previous presentation at LAC2003).
Talk and workshop at the Free Software for Music workshops within the Resonances 2003 Festival, Paris, France
Talk at the BYOL, Bring Your Own Laptop Free Software for Music Workshop, July 4 2003, Prato, Italy
International Computer Music Conference 2002 (ICMC2002), Göteborg, Sweden
In this paper I present an overview of Planet CCRMA, a freely available collection of Open Source sound and music packages that reflects the Linux software environment I currently maintain at CCRMA. It is built on top of Red Hat 7.2 and is easy to install and upgrade. Land on Planet CCRMA at http://ccrma.stanford.edu/planetccrma/software/.
International Computer Music Conference 2001 (ICMC2001), La Habana, Cuba
International Computer Music Conference 1999 (ICMC1999), Beijing, China
International Computer Music Conference 1998 (ICMC1998), University of Michigan, Ann Arbor, USA
International Computer Music Conference 1997 (ICMC1997), Thessaloniki, Greece
International Computer Music Conference 1996 (ICMC1996), Hong Kong University of Science and Technology, China
This paper will describe the current implementation of PadMaster, a real-time improvisation environment running under the NextStep operating system on both NeXT hardware and Intel PCs. The system was designed with the Mathews/Boie Radio Drum in mind, but can now use alternative controllers, including widely available graphics tablets. The current version adds soundfile playback and algorithms to the preexisting palette of performance options.
This paper was also presented at the SBCMIII Conference (Third Brazilian Symposium in Computer Music), Recife, Brazil (pdf)
International Computer Music Conference 1995 (ICMC1995), Banff Centre for the Arts, Canada
This paper will describe the design and implementation of PadMaster, a real-time improvisation environment running under the NextStep operating system. The system currently uses the Mathews/Boie Radio Drum as a three dimensional controller for interaction with the performer.
A more detailed version of the same paper was presented at the SBCMII Conference (Second Brazilian Symposium in Computer Music) in Canela, Brazil
International Computer Music Conference 1994 (ICMC1994), DIEM, Danish Institute of Electroacoustic Music, Denmark
Composer, performer, lecturer and computer systems administrator at CCRMA, Stanford University
(C)1993-2023 Fernando Lopez-Lezcano. All Rights Reserved.