Bing Star | The Spatialized Audissey

— Concert 2 —


SAT APR 20, 7:30PM PDT

CCRMA continues its 50th Anniversary Concerts with two new programs of works by students, faculty, staff, and alumni in the Bing Concert Hall Studio.

If possible, please wear a pair of headphones or earphones, as we are streaming binaural audio.



PROGRAM

(click on titles and artists to see program notes and bios below)


Fernando Lopez-Lezcano: Three Dreams (1993)
          Paper Castles
          Invisible Clouds
          Electric Eyes

Soohyun Kim: Cyberpunk Sanjo (산조) (2023)

Josh Mitchell: PCR (2017-2024)

Andrew Schloss: Towers of Hanoi (1980)

Kimia Koochakzadeh-Yazdi: Alloy Resonator (2024)

Engin Dağlık: a light stung the darkness (2024)

Barbara Nerness: Under the Surface (2021/24)

Matt Wright: Taqsim / The Humanity of Arabs (2024)



PROGRAM NOTES

Three Dreams | Fernando Lopez-Lezcano

This piece is about impossible dreams. We often build beautiful Paper Castles on Invisible Clouds, thinking that dreams are reality, or that they can be turned into reality by sheer will power or the wave of a magic wand. These first two sections are like twin brothers, intermingled yet separate. As for the third section, Electric Eyes, if you have ever felt the startling contact of electric eyes, there is no need for me to explain. If you have not, mere words will never be enough.

The piece was composed using the CLM sound synthesis and processing environment written by Bill Schottstaedt, running on a NeXT computer, to which I added a custom unit generator that performed the four-channel spatialization. The processed sound materials are sampled tubular bells, cowbells, cymbals, gongs, knives, and screams, while the synthetic sounds are created with quite simple additive synthesis instruments.

Additional notes:
A lot of wonderful music was created on the Samson Box, which could do quad, but I arrived at CCRMA too late to use it. The NeXT computer was only stereo, but as I really wanted to work with more channels I designed and built the QuadBox (together with Atau Tanaka, while I was working in Japan): four high-quality DACs that could be connected to the DSP port of the NeXT, plus an Objective-C/DSP 56000 assembler program that transported digital samples to four speakers. I never went back to stereo.

Cyberpunk Sanjo (산조) | Soohyun Kim

Computer music meets Korean traditional music. Sanjo (산조) is a Korean traditional music style involving two players, one on a melodic instrument and the other on percussion, in a musical conversation. Known for its improvisational nature, it is often compared to jazz jam sessions in Western music. In this performance, Kim plays his own melodic computer music instrument in the style of Korean traditional music. His instrument, built around a GameTrak controller, is designed to express the essence of the dynamic vibrato and pitch bends of Korean traditional music. Sitting across from him is a “ghost” computer player who provides the percussion component. This Sanjo performance is thereby presented as a fusion with computer music, unprecedented in Korean traditional music history.

PCR | Josh Mitchell

I wrote the guitar part here in 2017-ish, about an area near where I grew up called the Purisima Creek Redwoods Open Space Preserve. That’s what “PCR” has always meant to me, but starting in 2020, another meaning for this acronym has slowly overtaken it. Polymerase Chain Reaction methods are a powerful tool for quickly diagnosing infectious diseases, and they’ve been widely used in COVID-19 tests. This new version of the song is inspired by that shift in meaning, as well as by how languages change over time in general.

Towers of Hanoi | Andrew Schloss

The idea for this piece sprang from an exercise in recursive programming. The entire structure of the piece (except for the sustained tones) is generated by a recursive algorithm that solves the Towers of Hanoi puzzle. The problem is to move a tower of graduated discs one by one to another site, moving only one disc at a time, and always putting a smaller disc on a larger one. If the puzzle is done optimally, it takes exactly 2ⁿ − 1 moves to solve, where n is the number of discs. Here, the scheme is recreated with sound: the “discs” are sequences of discrete pitches, and the “towers” are different timbres. In this piece, there are three instantiations of the process that are concurrent and occur at different rates.
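
As an illustration only (this is not Schloss’s actual program), the short Python sketch below shows the standard recursive solution and one hypothetical way its moves could be mapped to sound, with each disc assigned a pitch and each peg a timbre, as the note describes; the pitch and timbre tables are invented for the example.

def hanoi(n, source, target, spare, moves):
    """Append the optimal sequence of (disc, source, target) moves."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the smaller discs out of the way
    moves.append((n, source, target))            # move the largest free disc
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller discs on top of it

PITCHES = ["C4", "D4", "E4", "G4"]                    # hypothetical disc-to-pitch map
TIMBRES = {"A": "bell", "B": "string", "C": "reed"}   # hypothetical peg-to-timbre map

moves = []
hanoi(4, "A", "C", "B", moves)
assert len(moves) == 2**4 - 1                         # the optimal solution takes 2^n - 1 moves

for disc, src, dst in moves:
    print(f"disc {disc}: pitch {PITCHES[disc - 1]}, from {TIMBRES[src]} to {TIMBRES[dst]}")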

Alloy Resonator | Kimia Koochakzadeh-Yazdi

Co-designed by Kayla K. Yazdi.

a light stung the darkness | Engin Dağlık

—And then?
—And then the sun shining through a small opening, the sparrows pecking at my windows, and the bells mumbling an antiphon in the clouds woke me up. I had a dream.
—And the devil?
—He does not exist.
—And art?
—It exists.
—But where?

Aloysius Bertrand

Recorded with pianist Carolina Santiago.

Under the Surface | Barbara Nerness

I have always been a chameleon
shapeshifting
into acceptable forms

When it isn’t safe to be who you are
look for me
under the surface


Taqsim / The Humanity of Arabs | Matt Wright

In a time when it seems difficult for some even to concede the humanity of certain Arabs, this piece defiantly and gratefully draws from Arabic musical materials to imagine a better future. All electronic material is generated and controlled in real time, using sounds recorded in real time. The instrument is the oud / العود, the fretless short-necked Arabic lute, indeed the origin and namesake of the “lute” (with the misunderstood definite article “al” / ال from “al oud” / “the oud” giving us the “L” in “lute”), and the tonal material and phrase organization draw from the Arabic maqam / مقام system. This piece is dedicated to all Arabs and all those who recognize their humanity.




ABOUT THE ARTISTS

Engin Dağlık is an artist and musician who essentially enjoys exploring anything related to sound and music.

Soohyun Kim is a second-year master’s student and incoming PhD student at CCRMA, Stanford University, whose primary research interest lies in human-AI interaction design for new music performance. He is also a music producer and recording/mixing engineer trained in South Korea who has contributed to numerous popular music productions. As a musician, he is a guitarist and singer.

Kimia Koochakzadeh-Yazdi is an Iranian composer based in California. She writes for hybrid instrumental/electronic ensembles, creates electroacoustic and audiovisual works, and performs electronic music. Kimia explores the unfamiliar familiar while constantly being driven by the concepts of motion, interaction, and growth in both human life and the sonic world. Also a cross-disciplinary artist, she has actively collaborated on projects revolving around dance, film, and theater. Kimia graduated from Simon Fraser University with a Bachelor of Fine Arts in 2020 and is currently pursuing her DMA in composition at Stanford University.

Fernando Lopez-Lezcano was given a choice of instruments when he was a kid and liked the piano best. His dad was an engineer and philosopher; his mother loved biology, music, and the arts. His background includes both music and engineering, and he thrives on a balanced diet of art and technology. He throws computers, software algorithms, engineering, and sound into a blender and serves the result over many speakers. He can also hack Linux for a living, and likes to pretend he can still play the piano. Over the past few years he has returned to his roots, developing a performance practice that uses custom modular synthesizers for real-time performances. He is happiest in the middle of his Dinosaurs and making lots of noises.

Josh Mitchell is a musician and researcher from Half Moon Bay, California, and is currently pursuing an M.A. in Music, Science, and Technology at CCRMA. His combined research, performance, and compositional focus is on modeling nonlinear systems, with a particular emphasis on nonlinear feedback and chaotic dynamics. The end goal behind this focus, however, is and will always be teaching physics to musicians and music to physicists!

Barbara Nerness is an artist, researcher, and PhD candidate at Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA), also pursuing a certificate in Composition. Her research focuses on brain dynamics measured by EEG during music improvisation, and she also writes, performs, and improvises her own music inspired by sounds of the body (heartbeat, breath, brainwaves). Barbara enjoys collaborating with other artists, especially on projects investigating surveillance and technological subversion. She has performed at venues throughout the Bay Area and Los Angeles, as well as at the Brooklyn Academy of Music in New York, ZKM (Center for Art and Media) in Germany, and the Sonic Arts Research Centre (SARC) in Ireland. She holds an M.A. in Music, Science, and Technology from Stanford University and a B.A. in Mathematics from UC Berkeley.

Andrew Schloss is known primarily as a performer, improviser, and virtuoso on the radiodrum, an instrument based on Max Mathews’ radio baton but optimized for percussive gesture-sensing. Using this instrument, he has created new works and collaborated extensively with composer David A. Jaffe on numerous musical projects involving both acoustic and electronic sounds. In addition, he has explored the combination of electroacoustic music with Cuban jazz, performing extensively with Cuban pianist Hilario Durán, as well as maestro Chucho Valdés, Ernán López Nussa, and Jeff Gardner. In public art, he has collaborated with Trimpin, Nobuho Nagasawa, and Buster Simpson. Schloss was a Fulbright Scholar at IRCAM in 1987, at the invitation of David Wessel, which is when he began working on the radiodrum combined with the very first version of Max/MSP. He studied at Bennington College, the University of Washington, and Stanford University, where he received his PhD in 1985, working at CCRMA. He has taught at Brown University, UC San Diego, and The Banff Centre, and currently teaches at the University of Victoria.

Dr. Matthew Wright is a media systems designer, improvising composer/musician, computer music researcher, father of an energetic 6-year-old, and CCRMA’s Executive Director and acting Technical Director. His computer music career began in 1990 in a class taught by David Wessel at arch-nemesis UC Berkeley’s Center for New Music and Audio Technologies (CNMAT), where he worked as a staff researcher from 1993 to 2008, before and during his CCRMA PhD. He later worked at the University of Victoria and UC Santa Barbara. His research has included real-time mapping of musical gestures to sound synthesis, helping develop and promote the Sound Description Interchange Format (SDIF) and Open Sound Control (OSC) standards, computer modeling of the perception of musical rhythm, and musical creation with technology in a live performance context. As a musician, he plays a variety of Middle Eastern and Afghan plucked lutes, Afro-Brazilian percussion, and computer-based instruments of his own design, in both traditional music contexts and experimental new works.

Luna’s curiosity led her to the world of cave acoustics, where she embarked on expeditions to uncover the secrets of ancient and inaccessible soundscapes. In her second year of pursuing a Ph.D. at CCRMA, her research explores the intersection of acoustics, audio technologies, music, archaeology, and natural underground architectures.



Many thanks to all the CCRMA staff and Bing staff for helping to produce this concert!

The program cover art was created in part with an AI image generator, working from the poster of the 1974 film "Dark Star".


STANFORD’S LAND ACKNOWLEDGMENT STATEMENT

Stanford sits on the ancestral land of the Muwekma Ohlone Tribe. This land was and continues to be of great importance to the Ohlone people. Consistent with our values of community and inclusion, we have a responsibility to acknowledge, honor, and make visible the University’s relationship to Native peoples.


Center for Computer Research in Music and Acoustics

In 1964, while pursuing graduate studies with Professor Leland Smith, John Chowning began work in computer music at Stanford using Music IV, with help from Max Mathews of Bell Telephone Laboratories. Initial experiments were carried out with the help of the Computer Science Department on its time-sharing computer system. Chowning and computer science student David Poole then put together the first on-line computer composition and synthesis system, with technical help from Computer Science and Electrical Engineering. Using this system, John Chowning wrote the first programs for moving sound sources through a four-speaker space.

In 1966, the Stanford Artificial Intelligence Laboratory moved to the D.C. Power Laboratory Building on Arastradero Road. At the same time, Chowning joined the music faculty, teaching music theory and computer music, and the first course in computer-generated music was offered. Exploratory work on musical timbres began in 1967 and led to the discovery of the use of frequency modulation (FM) for sound synthesis by John Chowning, with the help of David Poole and engineering graduate student George Gucker. The technology was commercialized by Yamaha Corporation, resulting in the DX-7 (1983), the first commercially successful digital music synthesizer.

Early compositions from CCRMA included: “Sabelithe I” for sound and 3 performers by John Chowning in 1966 (never completed due to the Artificial Intelligence Laboratory’s move to the D.C. Power Laboratory Building); “Rondino” for stereo tape, by Leland Smith in 1968; “Pour” for sound and recorded voice, by Martin Bresnick in 1969; “Fragment” for stereo tape, by Martin Bresnick in 1970; “Sabelithe II” for quad tape, by John Chowning; “Machines of Loving Grace” for bassoon and narrator with stereo tape and “Rhapsody for Flute and Computer” for flute and stereo tape, by Leland Smith in 1971; “Turenas” for quad tape, by John Chowning in 1972; “A Little Traveling Music” for amplified piano and quad tape, by Loren Rush in 1974; a realization of Robert Erickson’s “Loops” by John Grey in 1974; and “Song and Dance” for orchestra and quad tape, commissioned by the San Francisco Symphony, by Loren Rush in 1975.

Because of their growing reputation, members of the computer music group at Stanford were asked by Pierre Boulez in 1973 to participate in the planning stages of his music research institute being formed as part of the Centre Pompidou in Paris. In August 1975, the IRCAM group came to Stanford to participate in a special workshop on computer music. The research relationship and exchange between the two centers has continued over the years.

In 1974, John Chowning, Loren Rush, John M. Grey, and James A. Moorer submitted an application to the National Science Foundation (NSF) to support research at a new Center for Computer Research in Music and Acoustics (CCRMA). Other funding included a gift from Mrs. Doreen B. Townsend and a grant from the National Endowment for the Arts for computing equipment for musical purposes. To speed up music synthesis, CCRMA commissioned a real-time digital synthesizer from Systems Concepts, designed by Pete Samson (the Samson Box), which came online in 1977. Although part of the Music Department at Stanford, CCRMA continued to share facilities and computing equipment with the Stanford Artificial Intelligence Laboratory (SAIL) of the Computer Science Department. The founding co-directors of CCRMA were faculty members John Chowning and Leland Smith and research associates John M. Grey, James A. Moorer, and Loren Rush. The first computer music concert (“An Evening of Computer Music and Film”) was held on August 10, 1976 at Dinkelspiel Auditorium, and in 1978 CCRMA presented a concert of computer music at the Stanford Museum of Art.



