Juan I Reyes

Research:

Cycles and symmetries for music event sequences, and composition with pitched sounds

More experimenting has been done on this subject, in a context based on Dallapiccola's tone rows and heuristics. Mostly, it summarizes approaches and methods for computer-aided composition and mathematical modeling. The Lyric Suite was a point of departure for using tools at hand, and others developed ``on purpose,'' for working with cycles and symmetries of pitch-class sets. At the same time, results on the subject have made their way into Do_Marin-TA and Arch Carrellage, which seem complete works for now. More trial and error with composition in first- and second-order Ambisonics, in addition to Ambisonics reverb, can also be heard in these two pieces, though more testing of these features on different multi-phonic and multi-speaker layouts is still pending. Additional results show that subtle sound-source motion works well when delineating spatial acoustic composition. Static sound sources placed ``in'' the sphere give a 3-D spatial perspective because they behave as a ``sound object''. Further along this path, options have been found for blending independent characteristics that sprout among atonality, microtonality, twelve-tone tonality, and symmetries, in an effort to frame a language for gestures and identity. [2019-06-12]

Complementing research on ``serialism'' and ``twelve-tone tonality'': symmetry and combinatorics.

(On-going)

Symmetry is a familiar idea that seems to be an innate feature of human perception, perhaps a simplification of pattern recognition or just an informal description of regularities of shape and structure. Here the search is for rigid motions such as translations, transpositions, rotations, reflections and glide reflections, as well as cycles, applied to patterns and groups of musical parameters such as notes, pitch-class sets, motion pannings, and others. Interestingly enough, these also point to the perception of symmetrical features of sound aside from periodicity, in addition to similarities among musical patterns and groups that tease cognition. Implementing tessellations to achieve musical shapes and form, tone rows and vertical intervals in twelve-tone tonality has been more than a starting point. Is it possible to map tessellations to time-domain parameters such as intensity mapping? If there is a geometry of the space where sound sources are placed, there might be a chance for some sort of tiling that fits inside these constraints. Cathedrals such as Saints Peter and Paul (Philadelphia) show more than a handful of symmetries and tessellations that serve as good sources for mappings to sound.
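
As a minimal illustration of these rigid motions applied to pitch-class sets, the short Python sketch below implements transposition, inversion, rotation and a small search for the operations under which a set is invariant. The function names are illustrative and not taken from any of the tools mentioned here.

# Minimal sketch: rigid motions on pitch-class sets (mod 12).

def transpose(pcs, n):
    """T_n: shift every pitch class by n semitones (a 'translation')."""
    return [(p + n) % 12 for p in pcs]

def invert(pcs, n=0):
    """T_n I: reflect each pitch class around 0, then transpose by n."""
    return [(n - p) % 12 for p in pcs]

def rotate(pcs, k):
    """Cycle the ordering of the set by k positions."""
    k %= len(pcs)
    return pcs[k:] + pcs[:k]

def invariances(pcs):
    """List the T_n and T_n I operations that map the set onto itself."""
    ref = sorted(pcs)
    ts = [n for n in range(12) if sorted(transpose(pcs, n)) == ref]
    tis = [n for n in range(12) if sorted(invert(pcs, n)) == ref]
    return ts, tis

whole_tone = [0, 2, 4, 6, 8, 10]
print(invariances(whole_tone))        # a highly symmetric set: many operations hold
print(transpose([0, 1, 4, 6], 5), invert([0, 1, 4, 6], 5))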

Serialism Revisited

While dealing with physical models of tuned and acoustical instruments, "pitch" becomes relevant. Furthermore, avoiding diatonic sonorities is a must in new musical expression. Therefore the question of spectral development rather becomes a "pantonal" development of sound sources, given processes of pitched acoustical events. This has been proven by followers of the serialist schools for years now; performances of Bruno Maderna's "Quadrivium" and Luigi Nono's "La lontananza nostalgica utopica futura" testify to the above. Examples along a line from Schoenberg and Webern to Boulez and Maxwell Davies also abound.

At this point the aesthetic paradox of the "new" in a work of art arises: are compositional techniques of the past century still current? Is music conceived with serial techniques a novel expression? After digging into the subject, several documents and dissertations outline a "yes" in most respects. On the other side of the coin, the theory and science behind them suggest that exploration around serial heuristics remains to be done, in particular if we want to deal with matrices, subsets, combinatorics and, by the way, Ferneyhough's funnel matrices.

The above is itself sufficient motivation to face old and current serialism. Consequently, implemented algorithms in Rick Taube's CM3 to generate rows and magic boxes. Tested Maderna's tone rows and translated them to MIDI files and note sequences for Snd's S7. Pasted some of these melodic patterns together to get various combinations and textures. To a great extent, this generated data was used to sprout gestures in a composition for the physical model of bowed strings titled "Bowdoin Sketches". [11/30/2016]
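
As a rough Python sketch of the row-matrix (``magic box'') idea, assuming the usual convention that each row of the matrix is a transposition of the prime form and the first column reads its inversion (this is not the CM3 code, and the example row is arbitrary):

# Minimal sketch: build a 12x12 twelve-tone matrix ("magic box") from a row.
# Matrix rows are transpositions of the prime; the first column is the
# inversion; reading rows right-to-left gives the retrogrades.

def tone_matrix(row):
    assert sorted(row) == list(range(12)), "row must use all 12 pitch classes"
    p0 = row[0]
    inversion = [(2 * p0 - p) % 12 for p in row]    # mirror around the first note
    return [[(p + i - p0) % 12 for p in row] for i in inversion]

example_row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]   # arbitrary example row
for line in tone_matrix(example_row):
    print(line)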

Heard this on BBC: "perhaps pantonal developments are not so well understood yet since their perception might not seem honey to the ears of some."

- Spontaneity in Improvisation and Real-Time Performances -

(On-going)

On the quest for the "new" (or newness) in music and the arts. While many works are frozen in time, particularly in fine-art contexts, the viewer or spectator still discovers nuances as a work is being perceived, say a music performance, an installation, or a sculpture. It is known that "new" constructs are produced in the mind as it gathers new or different connections, against those held as prejudices and expectations, and changes are perceived as discoveries by a gratified mind. Once there is a discovery, there is a reward. Spontaneity causes change in prejudice and expectation; consequently, if there is a variation, chances are it is perceived as something new in the mind of the spectator.

This description can be portrayed in improvisation, as in jazz and other performing arts; therefore the improvised work might always be new. Conversely, in a "piece" or "object" of art frozen in time, spontaneity is held on the viewer's side, because of what the spectator is discovering while the senses are excited in the perception of a form or expression. It therefore sounds logical that "I am Sitting in a Room", "4'33''", "Four6", "In C" and "Turenas" will always be new, and subsequent hearings will also hold a taste of new content to be sensed and perceived. More on the subject in Juan Reyes' "Canciones de máquina, los roles del intérprete y el instrumento musical", Revista Trilogía, Instituto Tecnológico de Medellín. There was a talk on this subject at the 2013 Caribbean Art Congress in Cartagena, Colombia.

Music Performance and Machine Interaction

Though tape music has provided a great deal in many respects, including from a compositional standpoint, real-time performance and live interaction have also been very appealing; there is nothing comparable to traditional instrumental performance. Furthermore there is the option of machine-performer interaction, whereby the machine listens to a performer or, conversely, a performer plays along with the machine. Miller Puckette and Cort Lippe's pioneering research on score following while at IRCAM gives cues on how to tackle the subject. Highly motivational was a performance of Gandy Bridge XI by the Convolution Brothers, using an ISPW, SGI, ISDN and who knows what else, at ICMC 1997 in Thessaloniki.

Real-time systems were out of reach for the common terrestrial until the late nineties, many of them relying on MIDI interaction and very few on signal processing. The Boie-Mathews radio drum was a paradigm for new-instrument research and development. A key issue in its design was the use of software to enhance many of its features. This path led to the development of the "radio baton", whose software was written by Max Mathews himself and which paved the way for live-interaction packages like "Max" and "Pd". Aside from being a great didactic tool, Pd shines as a great interactive system. Being so, and in contrast with tape music or mediated performance, new-music quests were updated from searching for optimal spectra and timbres to levels of interaction between performers and machines. Pd inherited a lot from the ISPW research on score following and live interaction.

Artist-composer Felix Lazo and I chatted about "new music quests" at Ibirapuera in Sao Paulo, while working through points on music interaction. He pointed out Rowe's Machine Musicianship as a seminal work on live-performance computer interaction. I had based previous research on the subject also on Rowe's Interactive Music Systems, and on Teresa Marrin's pointers related to Todd Machover's Brain Opera, in addition to her research on the "conductor's jacket" and the "digital baton". To be fair to my conscience, and after attending one of Dick Duda's talks on conditioning sensor data at Stanford, I realized more digging into topics like "pattern recognition" and "machine learning" needed to be done.

Felix kept insisting on the issue of machine and live-performer interaction, and thus persuaded me to work with SuperCollider. This led me to port previous work I had done with Craig Sapp exploring on-the-surface interaction while toggling initial conditions of dynamical systems. At the time we had used Craig's interaction system Improv with radio batons, the Disklavier and QWERTY keyboards. Some of these ported applications have been used in Horace in San Mateo, a composition for live interactive performance using SuperCollider.

As stated, an important path toward understanding music performance and machine interaction is score following, but just as crucial are the points in David Jaffe's article "Ensemble Timing in Computer Music", Computer Music Journal, Vol. 9, No. 4 (Winter 1985), pp. 38-48. On the performance side, Marvin Minsky's article "Music, Mind, and Meaning", Computer Music Journal, Vol. 5, No. 3, is encouraging and has proven useful for telematic expression and teleconcerts.

Should also point out that Max Mathews, Bill Verplank and I very often discussed the real-time issue. In fact Max persuaded me to compose for live performance because of haptics, expression and the control of gestures. He kept insisting that part of the magic in music lies in how the music is performed, and not so much in focusing on the sound and acoustic parts. In this spirit Feather Rollerball, a composition for live piano, radio baton and Scanned Synthesis, came into being.

- Scanned Synthesis -

(On-going research)

In the late nineties the word "haptics" would often come up in conversations all around CCRMA's Knoll corridors. However, the term was introduced to me by John Chowning, while referring to Sile O'Modhrain's projects and research, on his first visit to Andes University in Bogota (circa 1993) [read more...].

- Banded Waveguides -

Worked on biquad filters (recall that these are generally two-pole, two-zero filters) and resonant filters for a banded waveguide model, as per the paper by Essl, Serafin, Cook and Smith, "Theory of Banded Waveguides", Computer Music Journal, Spring 2004, Vol. 28, No. 1, pp. 37-50. Used Cook and Essl's algorithm to get bowed-bar sounds as well as glass-friction sounds. Wrote Lisp and S7 code for bandedwg.cms on CLM. In CLM-4 a "Firmant" filter generator is more desirable than biquads; the "Firmant" generator in CLM is Bill Schottstaedt's version of Max Mathews' very high-Q two-pole filters. Results of this research can be heard in the singing Tibetan bowls of Open Spaces, and in bowed bars in Os Grilos, TikiTik and Horace in San Mateo. A paper, "Working with Banded Waveguides and Friction in Musical Contexts", was presented at WOCMAT-2006, NTNU, Taipei, Taiwan ROC.
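
A very small Python sketch of the resonator-bank idea behind banded waveguides is given below: one feedback loop per mode, each holding a delay line tuned to that mode plus a two-pole, two-zero resonator centered on it. It is not the bandedwg.cms instrument, and the loss factor, pole radius and modal ratios are arbitrary choices.

import math

def banded_waveguide(modes, fs=44100, dur=2.0, loss=0.999, r=0.999):
    """One lossy feedback loop per mode: delay line + constant-peak-gain resonator."""
    n = int(dur * fs)
    out = [0.0] * n
    for f in modes:
        L = max(2, int(round(fs / f)))              # loop delay of roughly one period
        delay = [0.0] * L
        idx = 0
        x1 = x2 = y1 = y2 = 0.0
        g = (1.0 - r * r) / 2.0                     # unity peak gain at resonance
        c = 2.0 * r * math.cos(2.0 * math.pi * f / fs)
        for i in range(n):
            x = delay[idx] * loss
            if i < 64:                              # short impulsive "strike"
                x += (1.0 - i / 64.0) * 0.5
            y = g * (x - x2) + c * y1 - r * r * y2  # two poles, zeros at z = +/-1
            x2, x1 = x1, x
            y2, y1 = y1, y
            delay[idx] = y                          # feed back into the loop
            idx = (idx + 1) % L
            out[i] += y
    return out

# rough modal ratios of a uniform bar: f, 2.76 f, 5.40 f, 8.93 f
bar = banded_waveguide([440.0 * m for m in (1.0, 2.76, 5.40, 8.93)])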

- Using Lissajous Figures while Manipulating Intensity Panning -

On a visit to Bogota, Mesías Maiguashca introduced a method for intensity panning using Lissajous patterns in a composers' workshop. After some digging, it turns out that John Chowning had used this scheme in Turenas; I apologize for not knowing this detail when Maiguashca asked me. I had used Fernando Lopez-Lezcano's Dlocsig for this purpose, but my conscience bothered me for a while and I decided to tackle the problem myself using Dick Moore's ideas in his paper "A General Model for Spatial Processing of Sounds", Computer Music Journal, 7(3), pp. 6-15, Fall 1983. Pablo Cetta also assisted me on this issue. His doctoral dissertation "Un modelo para la simulación del espacio en música" further explores the matter, though his ideas are also based on Moore's theory.

As it happens, intensity panning works to some extent, but it does not give complete information for localization cues on moving sound sources. The information necessary for the perception of three-dimensional aural environments should include frequency shifting as well as interaural cues (which can be obtained using HRTFs). In addition to intensity panning, here achieved using Lissajous formulas, frequency shifts (Doppler shifts) are still needed, even for 2-D planar motion of sound sources. For a practical application of this method the interaural cues can be left out, at the cost of some precision. In signal-processing terms, Doppler shifts can be obtained with a time-varying delay line.
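
A possible Python sketch of this combination, assuming a stereo pair, a hypothetical Lissajous trajectory and a linearly interpolated, time-varying delay line for the Doppler shift (this is only an illustration of the idea, not the Dlocsig or CLM code):

import math

def lissajous_path(t, a=3.0, b=2.0, radius=5.0, phase=math.pi / 2):
    """Hypothetical 2-D Lissajous trajectory (in meters) for a moving source."""
    return radius * math.sin(a * t + phase), radius * math.sin(b * t)

def pan_and_doppler(signal, fs=44100, c=343.0, rate=0.25):
    """Intensity panning from a Lissajous path plus Doppler from a varying delay."""
    n = len(signal)
    size = int(fs * 0.1) + 2                   # circular buffer, roughly 100 ms
    buf = [0.0] * size
    left, right = [0.0] * n, [0.0] * n
    for i in range(n):
        buf[i % size] = signal[i]
        x, y = lissajous_path(rate * i / fs)
        dist = math.hypot(x, y) + 1e-6
        read = i - dist / c * fs               # time-varying delay -> Doppler shift
        i0 = int(math.floor(read))
        frac = read - i0
        s0 = buf[i0 % size] if i0 >= 0 else 0.0
        s1 = buf[(i0 + 1) % size] if i0 + 1 >= 0 else 0.0
        s = ((1.0 - frac) * s0 + frac * s1) / dist     # interpolate, 1/r rolloff
        az = math.atan2(x, y)                  # azimuth: 0 in front, +/- pi/2 at the sides
        p = 0.5 * (1.0 + max(-1.0, min(1.0, az / (math.pi / 2))))
        left[i] = math.cos(p * math.pi / 2) * s        # constant-power panning
        right[i] = math.sin(p * math.pi / 2) * s
    return left, right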

If used in a box-within-a-box context as proposed by Dick Moore, or in a ball-in-a-box metaphor as proposed by Davide Rocchesso (Computer Music Journal, 19(4), pp. 45-47, 1995), different motion patterns can portray illusions of natural movement such as that of flying birds or insects. Several CLM instruments have been developed by this author for this purpose. Chuchoter and Os Grilos are multiphonic works which make use of this diffusion scheme, also using second-order Ambisonics. This research was presented at IV Seminário Música Ciência Tecnologia: Fronteiras e Rupturas, Department of Music - ECA, University of Sao Paulo.

- Leslie and Time-Varying Delay Lines -

The Leslie effect, heard from the Leslie speaker, adds spatial-motion properties to a sound source. In particular, on Hammond B-3 organs it adds expressive cues to a tone that is otherwise static over its duration envelope. Great organ performers like Jimmy Smith, Lonnie Smith and the Deep Blue Organ Trio make clever use of this effect. Because of the frequency shifts due to the moving speakers, the effect gives the illusion of not just one sound source but several: at slow rotation speeds there is the effect of a moving source, while at fast rotation rates there is the effect of a chorus of multiple sound sources. Time-varying delay lines are used for various effects and for the simulation of moving sound sources. A physical model of the Leslie speaker was developed using the parameters and the algorithm suggested by Julius Smith, Stefania Serafin et al. in "Doppler Simulation and the Leslie", Proceedings of DAFx-02, 5th International Conference on Digital Audio Effects, Hamburg, Germany, September 2002, slides here. A ChucK application of this model is available, as well as CLM and S7 versions like "leslie.cms". More about these implementations [here].
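
The core of the effect can be sketched in Python as below, assuming a single rotating horn modeled by a sinusoidally modulated, linearly interpolated delay line plus some amplitude modulation; this is only a sketch of the idea, not the leslie.cms or ChucK implementations, and the parameter values are arbitrary.

import math

def leslie_sketch(signal, fs=44100, rotor_hz=6.5, horn_radius=0.18,
                  c=343.0, am_depth=0.3):
    """Rotating horn: modulated delay gives the Doppler shift, cosine gives the AM."""
    n = len(signal)
    size = int(2 * horn_radius / c * fs) + 4        # circular buffer
    buf = [0.0] * size
    out = [0.0] * n
    for i in range(n):
        buf[i % size] = signal[i]
        phase = 2.0 * math.pi * rotor_hz * i / fs
        # path-length change of the horn mouth, expressed as a delay in samples
        d = (horn_radius / c) * fs * (1.0 + math.sin(phase))
        read = i - d
        i0 = int(math.floor(read))
        frac = read - i0
        s0 = buf[i0 % size] if i0 >= 0 else 0.0
        s1 = buf[(i0 + 1) % size] if i0 + 1 >= 0 else 0.0
        s = (1.0 - frac) * s0 + frac * s1           # fractional delay read
        out[i] = s * (1.0 - am_depth + am_depth * math.cos(phase))
    return out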

- Telepresence - Teleperformance -

Following in the footsteps of Juan Pablo Cáceres and Chris Chafe, experiments were done at several universities in Colombia to test connectivity with other centers such as USP, Universidade de São Paulo in Brazil, UC-Irvine and Stanford in the U.S. Every year latency, as well as the up-and-down symmetry of network speeds, has improved. Advanced networks in Latin America are connected through Red Clara; in Colombia these networks reach Red Clara through Renata, and most Colombian higher-education institutions are hooked up to Renata. The availability of these networks has provided a realistic vision of how to search for new expressions that make use of telematic procedures. New works exploring this medium have sprouted worldwide as well as locally.

Given that latency is a major factor, works that appropriate this issue are now beyond their dawn and thus have been rehearsed and performed. Many have taken the form of 'teleconcert' pieces, but there have also been audiovisual installations in addition to corporal and body-art expressions featuring telepresence. While many technical difficulties have been overcome, bureaucracy is still an issue, not leaving aside the plastic, creative and performance challenges. Tele-espacios Abiertos is a composition for video and teleperformance with musicians at different geographical sites. TikiTik tackles the quest for spontaneity while improvising on top of a multiphonic texture across several remote sites. A paper dealing with group interaction for ensembles and telepresence has been published as "Performance e interacción con ensembles y telepresencia", Clave 19-07, No. 5, 2012, Arts Dept, Los Andes University, Bogotá.

- Dynamical Systems -

Dynamical systems are a good and feasible alternative for algorithmic composition. Chaos and fractals sketch natural forms in every sense and are therefore treated as building blocks in many compositions. Alfredo Restrepo, an electrical engineer at Los Andes University, first introduced me to chaos while demonstrating his approach to the Teager filter commonly used in vision applications. Subsequently Craig Sapp and I worked on attractors like the Henon map, the logistic map, Teager and even the Lorenz attractor. Some experimentation and successful results using Craig's program Improv were further implemented on Max Mathews' Radio Baton and later in SuperCollider and Pd. Examples were also tried on CCRMA's Disklavier using real-time interaction. Dynamical systems of this sort can be heard in Horace in San Mateo and those "pretty eyes". Some of the approaches here can also be used in sonification, auditory display and audio remapping of data. "Señales musicales con sistemas dinámicos y frecuencias hápticas" is a paper reflecting this research, published in the Proceedings of IX-STSIVA-2006, IEEE Chapter of Colombia, Universidad Javeriana, Bogota.
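
As a small Python sketch of how such orbits can be remapped to note events (the scale range and mappings here are arbitrary, not the ones used in the pieces above):

# Minimal sketch: mapping dynamical-system orbits onto MIDI-style note numbers.

def logistic_orbit(r=3.8, x0=0.5, n=64):
    """Iterate the logistic map x <- r*x*(1-x); r near 3.57-4.0 behaves chaotically."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def henon_orbit(a=1.4, b=0.3, n=64):
    """Iterate the Henon map (x, y) <- (1 - a*x^2 + y, b*x)."""
    pts, x, y = [], 0.0, 0.0
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        pts.append((x, y))
    return pts

def to_notes(values, low=36, high=84):
    """Scale orbit values in [0, 1] onto a note-number range."""
    return [int(round(low + v * (high - low))) for v in values]

melody = to_notes(logistic_orbit())
# a second voice from the Henon map's x coordinate, rescaled from roughly [-1.5, 1.5]
voice2 = to_notes([(x + 1.5) / 3.0 for x, _ in henon_orbit()])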

- J.C Risset's Rhythmic Paradoxes -

Risset's rhythmic paradoxes work in the time domain. Their description is similar to that of Shepard tones, but instead of working with partials and spectra, here continuously changing tone durations give the illusion of an everlasting accelerando (or ritardando). Wrote a CLM instrument to experiment with and achieve these sounds. Marimonda Sketches, EXpySaxn and Trax Pong make use of J.C. Risset's rhythmic paradoxes.
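
A minimal Python sketch of the idea follows: several voices whose tempos double over a fixed period while their loudness traces a bell curve over the tempo range, so voices fade in at the slow end, fade out at the fast end, and the overall pulse seems to accelerate forever. The parameter values are arbitrary and this is not the CLM instrument mentioned above.

import math

def risset_accelerando(duration=30.0, voices=4, period=10.0, tmin=1.0, dt=0.001):
    """Return (time, amplitude) pulse events for an 'endless accelerando'."""
    events = []
    phases = [0.0] * voices
    for s in range(int(duration / dt)):
        t = s * dt
        for k in range(voices):
            c = ((k + t / period) % voices) / voices    # position in the tempo range
            tempo = tmin * (2.0 ** (voices * c))        # beats per second
            amp = math.sin(math.pi * c) ** 2            # fade in/out at the edges
            phases[k] += tempo * dt
            if phases[k] >= 1.0:                        # this voice produces a beat
                phases[k] -= 1.0
                events.append((t, amp))
    return sorted(events)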

- Art and Gesture -

New expressions and intermodality (aka synesthesia) require a search for new grammars and semantics; in fact pattern recognition and machine learning depend heavily on this issue. Based on modes of interpretation, from either the performer's side or the listener's side, a lot can be gained by playing an instrument. Languages for translating performance codes transcend into lively and time-domain forms of expression, including the fine arts. By searching for symbols, an artist in any domain can help construct a language for a piece or a work of art, even if it is frozen in time. Gestures contain symbols that mean something for the viewer, the listener and for creators as well. This research led to several courses at Los Andes University and elsewhere. The subject is also explored in James Delgado's dissertation at Caldas University: Diseño de ambigüedad: la percepcion ambigua como estrategia para el concepto de doble conciencia del artista medial Roy Ascott a propósito de la realidad ciberespacial, 2010. Some works from the art and gesture course can also explain this issue, here.

- Haptics and Tactile Arts -

Several courses and workshops have come out of this research. Following the ideas of Bill Verplank and Max Mathews, tactile feedback is crucial for controlling "the tone" of an instrument. In electronic and computer renditions of instruments, sensors are used to collect haptic and tactile data. Haptics is perception through the sense of touch, and this kind of manipulation is prone to bring body gestures and expression into a musical performance. Several papers have been written on this subject: "Estímulo Musical: Influencias con Afectos y Emociones en lo Sonoro", and "Interpretación del gesto: escucha, tratamiento y reacción en sistemas interactivos", Clave 019-97, No. 2, 2009, Dept of Arts, Universidad de Los Andes, Bogotá. More on the object and subject of this research in the class notes of the art and gesture seminar, and also on this Elements and Introduction to Physical Interaction web page.

- A tentative Physical Model of the Bowed String -

Worked on implementing research on the acoustics of the bowed string, a model primarily developed by Julius Smith, Stefania Serafin, Davide Rocchesso et al. This model adds waveguides and scattering junctions for the torsional and longitudinal waves which are present in stringed instruments. See Serafin's doctoral dissertation "The sound of friction: real-time models, playability and musical applications", and Serafin, S., and Smith, J.O., "Impact of string stiffness on digital waveguide models of bowed strings", Catgut Acoustical Society Journal, vol. 4, No. 4. The code, initially tested in MatLab, was translated to CLM. Like most physical models, its usefulness depends on the initial conditions of its parameters, and several regions of the octave work better than others. The sounds of this model have been used in Freddie the Friedlander and in a new version of this composition called FtheF.
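
For reference, here is a minimal single-polarization sketch in Python of the classic two-delay-line bowed-string waveguide with a simple friction (bow) table. It leaves out the torsional and longitudinal extensions discussed above, it is not the MatLab or CLM code, and all parameter values are arbitrary.

def bowed_string(freq=220.0, fs=44100, dur=2.0, bow_velocity=0.25,
                 bow_position=0.27, bow_pressure=3.0):
    """Two delay lines (bow-to-bridge, bow-to-nut) coupled by a friction table."""
    n_total = int(round(fs / freq))                  # total string loop length
    n_bridge = max(2, int(round(bow_position * n_total)))
    n_nut = max(2, n_total - n_bridge)
    bridge = [0.0] * n_bridge                        # round trip bow -> bridge -> bow
    nut = [0.0] * n_nut                              # round trip bow -> nut -> bow
    bi = ni = 0
    lp = 0.0                                         # one-pole lowpass state at the bridge
    out = [0.0] * int(dur * fs)
    for i in range(len(out)):
        lp = 0.7 * lp + 0.3 * bridge[bi]             # gentle loss/lowpass at the bridge
        from_bridge = -0.95 * lp                     # inverting, lossy bridge reflection
        from_nut = -nut[ni]                          # rigid nut reflection
        string_vel = from_bridge + from_nut          # string velocity under the bow
        dv = bow_velocity - string_vel               # bow/string velocity mismatch
        # friction table: strong coupling near dv = 0 (stick), weak otherwise (slip)
        friction = min(1.0, (abs(dv * bow_pressure) + 0.75) ** -4.0)
        new_vel = dv * friction                      # velocity injected by the bow
        bridge[bi] = from_nut + new_vel              # waves leaving the bow point
        nut[ni] = from_bridge + new_vel
        bi = (bi + 1) % n_bridge
        ni = (ni + 1) % n_nut
        out[i] = lp                                  # take the output at the bridge
    return out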

Vintage:

- Estímulo Musical: Influencias con Afectos y Emociones en lo Sonoro

- Fundamentos de Sintesis de Audio con Frecuencia Modulada

- El Artista como Hacker: Adaptando y Apropiandose de los medios digitales

- Research: MIDI Improvements in PlanetCCRMA

- Additive Synthesis by Subtractive Resonant Filters

- Strad.ins, a Bowed String Implementation in CLM

- Research: Composing for the Physical Model of the Maraca

- Parameter Manipulation for Composing with Physical Models

- More on Musical Expression in Tape Music Composition

- Estados Alternados: reflexión sobre arte, ciencia y tecnología

- CCRMA @ The Age of Noise Festival de Los Tiempos del Ruido

- Wadi-Musa & ppP: composiciones realizadas en ambientes ``Open Source''

- Reconocimiento de Patrones: Una aproximacion para Expresion de Instrumentos y Arte Interactiva

- Composition: Los Vientos de los Santos Apóstoles: a four cd player and virtual organ piece

- The Influence of Text in Computer Music Composition: An Approach to Expression Modeling

- Modelos de Expresión Musical Basados en Análisis y Tratamiento de Señal de Audio

- A Proposal for using SMS Files for Expression Modeling

- Education with Computer Music in Colombia

- Expression with Algorithms at Los Andes

- Using Spectral Modeling as an Intuitive Approach among Colombian Composers

- The Choice of Electro-acoustic Music in Colombia


Juan Reyes is a composer and researcher whose works tackle computer and electroacoustic music elements, their conception, processes and craft. His research is aimed at the semantics of gesture and perception, as well as novel ways of performance and expression.