Recent:
- A handful of new compositions can be listened to: «HERE»
Like a Function of Past Values
Reflections from a talk given several days ago to music students, performers and artists with bare-bones technical backgrounds.
Very often, ideas on Physical Models of musical instruments (PMs) crop up conspicuously when focusing on expression and real-time synthesis of sound, and in like manner when explaining that they are employed on more than a handful of our compositions. Corresponding conference papers most often get commented or questioned on why we research or apply this method, given that real instruments, or even ``black boxes'', seem more feasible and are always at hand. Every time our response is: not necessarily so! For some it seems difficult to understand that in order to get a sound or texture, experimentation is needed, and having a real instrument at hand is not always achievable. Even working on Piano, Mallets and other tempered keyboards like Organs and Accordions gets trickier when using micro tuning and temperaments such as Bohlen-Pierce and Carlos. Nevertheless, dealing with models is an endeavor and resembles research. For instance, PMs can be tuned any way you want, but be aware of unusual (seldom surprising) resonances and responses; moreover, results in one direction might be rewarding while in the opposite frustrating and discouraging. On occasion initial conditions are controlled, but routinely they may not be. Thereupon, not hiding or denying facts, we occasionally end up parodying our antagonistic peers by asking ``why are we doing electronics if natural acoustics are great'', or more precisely, ``is signal processing worth pursuing?''. However, paraphrasing naive visionaries about this kind of pursuit, ``on a drive to uncertainty, eagerness and awareness are essential to accomplish discovery''. Recall, we are always -experimenting-, and a negative finding doesn't necessarily block all profiles. Likewise, we get in the loop and keep experimenting even if ideas are not to taste, or plain dumb.
My story on the subject of PMs goes back to the days when we were looking forward to finding out about the next big thing in sound synthesis. ``Next big thing'' meant, at the time, what was coming after FM Synthesis and the Yamaha DX-7 synthesizer. I remember an after-lunch conversation among friends, colleagues and engineering PhDs at Tresidder at Stanford, when thoughts and ideas collided in our heads while listening to what is known as Karplus-Strong plucked string modeling, along with reed-woodwind pipe acoustic schemas. They pointed out, “by simply working out functions and balancing wave equations you get your instruments”, meaning that an algorithm in software then known as ``Mathematica'' would generate a file with sound data resembling the acoustics of instruments such as guitars, flutes or clarinets. This file could later be heard using an audio digital-to-analog converter on not so widely available computer workstations. Later we officially learned that a ``model'' of an instrument would embody all its qualities, including excitation and resonant body. Methods for obtaining these sounds would depend on several variables besides shape and material. Excitation in an instrument amounts to dealing with random functions and numbers. In theory every parameter in a modeled instrument can be controlled by variables in its algorithms, constraining not only pitch and amplitude but spectra, vibrato and tremolo, just naming a few. Further on, our imagination went as far as changing dimensions of bodies plus exchanging excitations and bodies. For those more well versed, these ideas carried on to embedding these programs into integrated circuits so that they perhaps could be played in real time. Just months after this gathering, Toshifumi Kunimoto at his K's Lab at Yamaha in Japan came out with an instrument named the VL1 that modeled reed rigid-bore instruments in real time and could be effectively performed using a reed embouchure interface or rather a keyboard. Soon after they came out with the VP1 polyphonic synthesizer, modeling not only reeds but Karplus-Strong plucked strings among others. This became a breakthrough achievement for Yamaha, though not a big seller, and carved in stone the words ``Virtual Instrument'' for the way its sounds were generated.
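To make the above concrete, here is a minimal sketch of the Karplus-Strong plucked string, under my own simplifying assumptions (the parameter names and values are illustrative, not anyone's production code). A delay line seeded with noise supplies the excitation; each pass through the loop averages adjacent samples, a one-zero low-pass that damps high partials the way a real string decays:

    import numpy as np

    def pluck(freq=220.0, dur=2.0, sr=44100, loss=0.996):
        n = int(sr / freq)                   # delay-line length sets the pitch
        line = np.random.uniform(-1, 1, n)   # noise burst as initial conditions
        out = np.empty(int(sr * dur))
        for i in range(len(out)):
            out[i] = line[i % n]
            # averaging filter in the feedback loop; loss < 1 keeps it stable
            line[i % n] = loss * 0.5 * (line[i % n] + line[(i + 1) % n])
        return out

    samples = pluck()  # write to a file or DAC of your choice to audition

Note how pitch falls out of the delay-line length alone, which is one reason such models tune to any temperament so easily.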
Back in this century a friend commented, ``you are mentioning stuff in past tense and not looking forward to the next-great-thing the future is bringing''. Therewith he pointed out that we like remembering because age is taxing us, and we don't want others to fall into the same traps we fell into. Well, PMs came, some are successful, but to a great extent, as pointed out before, they are greatly misunderstood, hard to decipher, and have steep learning curves. Real-time performances of models demand familiarity and practice time as a function of their interfaces, but hopefully interesting sounds and discoveries might happen to the spirited performer. Human-instrument interfaces might need to be tweaked, becoming part of inventive and creative processes. Whether looking forward is an end or sticking to tradition is the objective, expression in playing an instrument is always at stake. We like new sounds and modes of expression, hoping we are going to tell what has never been said. Yamaha's VP1 is an all-purpose solution to the question of virtual instruments inside a real instrument interface. In perspective it is not the best solution, but it encourages exploration of how we would do things differently. The VP1 could be played and programmed just like traditional instruments; most PMs still pose the question of how a gesture could be achieved. Getting trapped in a labyrinth at a dead end is frustrating, and many give up. Likewise, puzzles provide probability windows to view a new thing. Quixotic postures promote easy and limited solutions that might be practical in the near term but in the long run might lead us into a trap. Perhaps a reason for people preferring sampling over synthesis is one of these dead ends. A practical solution for now, but not so intriguing and creative.
But if people still object to modeling of acoustic phenomena, there is a chance of light in models of electrical circuits that take acoustic or musical signals. We first grasped at the idea while advising a dissertation on modeling acoustic spaces. This led to modeling reverberating rooms by taking their measurements and applying them to delay lines using mappings of traveling sound waves in a space. At the Michigan computer music conference we came to terms with modeling Moog analog filter circuitry, and then again at Stanford's CoHo, our friend Gary Scavone illuminated us on the idea of reverb models and compressor models being developed by the music industry. One summer at CCRMA, we got our hands on the Leslie speaker model that Patty Huang and Stefania Serafin were researching and developing at the time. With the advent of music production workstations and digital signal processing in commercial gear, Toshifumi Kunimoto and his team at K's Lab saw an opportunity for developing models again and came up with the notion of Virtual Circuitry Modeling (VCM). Their models of outboard processing gear became options and even parts of Yamaha's digital mixing consoles. Modeling of outboard music equipment is also known as black-box modeling. The difference from hardware (or rather the improvement), at least in theory, is that these models are transformed into programmable white-box models.
A black box is a device, system or piece of gear with a single purpose, inputs, outputs and means to tackle its parameters. A white box could be a multi-purpose device with programming capabilities. This is the case with the modeling of vintage synthesizers like the Moog Minimoog, or gear like the DBX-160 compressor, Eventide Harmonizers and Lexicon 200 series reverbs, also known as hardware plugins.
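As a hedged sketch of what turning a circuit into a programmable white box can look like, here is a one-pole RC low-pass discretized with the bilinear transform (real VCM products model far more, nonlinearities and component tolerances included; this only shows the analog-to-code step, and all names here are my own):

    import math
    import numpy as np

    def rc_lowpass(x, cutoff_hz, sr=48000):
        # analog prototype: H(s) = wc / (s + wc), with wc = 1/(R*C)
        wc = 2 * sr * math.tan(math.pi * cutoff_hz / sr)   # pre-warped cutoff
        b = wc / (2 * sr + wc)                # feedforward coefficient
        a = (2 * sr - wc) / (2 * sr + wc)     # feedback coefficient
        y = np.zeros(len(x))
        x1 = y1 = 0.0
        for i, xn in enumerate(x):
            y[i] = b * (xn + x1) + a * y1     # bilinear-transformed difference eq.
            x1, y1 = xn, y[i]
        return y

Once the circuit lives as code like this, its parameters are open for inspection and reprogramming, which is exactly what the black-box hardware never allowed.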
It is widely accepted, even in academic circles, that there is 'nostalgia' everywhere in the music domain. When we listen to a song, we evoke past emotions, and given so, we hope for the next big hit from artists we already enjoy. In other words, we like to interpret (by making sense of) the most accurate representation of a moment we had, even if that moment was just a sound. We go to great lengths replicating moments we hold as images in our minds by exteriorizing them as much as we can. We find ways of memorizing not only in our minds but as impressions in bins of media as well. Memory, representation and transformation processes outsoar, becoming more than enough motivation for paintings, photography and recordings of singers, Opera and Orchestras as well. Like they say, ``Let's capture every essence of a moment'', so that we can evoke it later on. Thuswise, and avoiding going against the flow, we are here acknowledging that acts of memorization by capturing and reproducing moments are intrinsic to our age and culture. To the point where more music is broadly listened to through mediated recordings than in concert halls or live performances. Consequently, our brains are teased into perceiving these reproductions as ``real live'' performances without validating or questioning their context and origin.
Recollecting ``the essence of a moment'', like an instant in a photograph, a breathtaking moment that we want to immortalize on media or whatever means, capturing an instant in a flux is also akin to Physical Models, because the past is what we want to model in order to attain what worked before. It's true that in music, the future is tradition, no matter how hard we try to derail it. On a different scope, instantiating on trends and the memorization of emotions, I used to tell my students that fashion can be modeled by manipulating dynamical systems such as chaotic attractors going to stability, thus portraying that they always go back to their deterministic and stable values. Thus, it doesn't matter how tasteful ice cream flavors are, we always go back to chocolate and vanilla. Models are good because there's always good in the past and because ``future values are a function of past values'', like in a basic low-pass filter.
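A minimal sketch of that closing remark, assuming the simplest one-pole smoother (the coefficient a is an illustrative smoothing factor):

    y[n] = (1 - a) * x[n] + a * y[n-1],   0 <= a < 1

Every new output y[n] is literally a function of the past output y[n-1]; the closer a gets to 1, the more the past dominates the future.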
[ Fri Jun 28 06:23:01 PM PDT 2024 ]
*Revised [ Thu Sep 5 05:33:39 PM PDT 2024 ]
The only Keyboard is a QWERTY keyboard
Way back (circa 1976), Andy Moorer, prophesying on the advent of ``Musical Sound'' by means of digital and computer manipulation, gave the following remark:
The application of modern signal processing techniques
for the production and processing of musical sound gives the
composer and musician a level of freedom and precision of
control never before obtainable.
Have found through the years that digital and computer manipulation requires dedication and a great extent of constancy and tenacity. We should add to Moorer's statement that computer tools for musicians are widespread and available to anyone, even at no cost, provided users supply the hardware. Are we skeptical of the ``levels of freedom'' machine processing is furnishing us regarding performance and composition? As perspective, and through birdseye lenses circumscribed to past and present paths, what we bound as digital means is portrayed with levels of skepticism. Focusing on musical sound, there's still a pile of uncovered components that need to be unfolded again. For instance, Bill Schottstaedt's SND, an open source program for sound editing and signal processing, packs a myriad of elements testifying to the above quote. The surgical degree to which a sound can be conceived, edited, or processed using SND's features is also powerful and unprecedented.
But as powerful as SND may be, understanding its user interface requires a bit of experience in math, coding and programming. If we focus on ``... a level of freedom and precision of control never before obtainable ...'', programming objectives can be better laid out. Aside from real-time processing and live performance, we use SND for everything music- and sound-wise. It is a sound editor and processor, just like a text editor is for words. We edit sounds to be meaningful and expressive. We combine processes to make statements and phrasing that listeners perceive. In general we transform a concept into an idea. Even so, our interface with the object, be it a sound or a word in a text editor, is caged within computer screens and keyboards. A person looking over our shoulders will only see a couple of programs (apps in today's parlance) in addition to a command-line terminal: namely, SND and Emacs. So where the heck is the sound? A frantic music technology professor will point to the loudspeakers on the system.
Giving some perspective, both Andy Moorer and Bill Schottstaedt come from the days of CCRMA at SAIL. Those days, when discrete functions, FFTs, and more were boiling on mainframes at SAIL. Thereby, sound representations could not be seen, and the only way of checking a sound was by listening to it. If something was wrong, an audio file needed to be rendered again. If working on a time-sharing machine, this meant more waiting time. Necessarily, parameters and tools had to strictly conform to theories that worked!
SND follows part of this legacy, having inherited from Music-N paradigms, the Samson Box, PLa, and CLM. A lot of signal processing methods from the 'golden days of SAIL' and beyond are waiting to be uncovered by SND's users. For some this seems dreadful because, like they say, ``curiosity killed the cat'', and so SND's user base is not so widespread; ceteris paribus, we blame it on approaches to its interface by souls such as ``starry eyed composers''.
But hereupon, we are faced with workflows that presumptuously avoid constraining creativity. On this ground, 'now' generations focus on searching for ``levels of precision'' and thus opt for 'apps' that provide real-time interaction by means of tweaking sliders, avoiding QWERTY keyboards and commands. If a GUI gives the right sound, we are fine with the sound (are we listening?). Albeit, is it worth knowing what is going on under the hood while listening to a sounding process? -A now boring question that bothered many in the days of development of computer music heuristics-.
Seems that sound editing is overlooked because nowadays it is taken as a 'de facto' mechanism. Users' parlance appears to be bounded by copy-cut-and-paste, although fade-in and fade-out extend their dictionary, in contrast to legacy notions of spectra and envelopes. Consequently, many connote this modus operandi as handling ``sound objects'' by shaping and designing them through a peripheral methodology. It is good that sound processing programs that worked before transcend into toolboxes for work in sound design. But we cannot overstate that ``curiosity didn't kill the cat'', because as musicians we are always searching for not so fashioned semantics. Hence looking under the hood of course surpasses prejudice, by tracing boxes of discoveries.
SND fits every purpose, but admittedly it is not for everyone. It follows a paradigm of open source software whereby using its features requires reading code and hacking it, as a means for learning and understanding. To a great extent, achieving ``... levels of freedom ...'' demands no regrets for not leaving anything behind, considering walks on the subject have been taken before. SND is not just a program, it is a collection of subprograms, procedures, libraries and more. A well-known hacker's proverb remarks: take Emacs to edit, and hack them to your taste. The tools are there, regardless of a procedure or a language. A sound can be conceived, processed or edited and mixed. No matter that denying a ``musical sound'' as the nucleus of a composition might be perceived as preposterous.
Possibly sculpting should be considered as a metaphor for hacking sound components for a composition by means of a machine where the only keyboard is a QWERTY keyboard.
[ Fri Apr 26 04:12:16 PM PDT 2024 ]
Two people thinking are better than one
At gatherings this unforgettable winter, a lot of friends have brought up the subject of Artificial Intelligence (AI). Some are annoyed because their children or dear ones might lose their jobs. Further complicating issues, comments on magazine articles anticipating the likelihood that an unforgiving machine will read government or legal forms, allowing little or no margin of error on our part, seem bothersome. The tip of the iceberg regarding the subject: a baby gadget taking on pattern recognition in a microchip that will signal a phone app whether the baby is happy or sad. -Are we talking machine-intelligent motherhood?-
Most discussions on AI topics these days end up admitting that laymen deserve mind relaxation, and leastwise that machines might think better. Ingenuously, it seems conclusive that derivative offspring of AI are staying and thus becoming accessories for life and well-being. However, thinking of -brain power- means focusing on multitasking and parallel processing abilities, in addition to processing speed, given that our survival-instinct features are processing input all the time. Whatever happens in someone's mind is still a complex endeavor. Accordingly, the tendency gears toward AI positioning itself as an extension of the brain.
In the aforementioned pal exchanges we were confronted with asserting AI examples which might prevail over time. A lot of those came around the musical domain, considering we have been on this subject matter for more than twenty years now. Topping the list are Score Followers, automatic comping, analysis of scores and musicological classification, among others. Worth mentioning, models and exponents for gesture sensing and mapping on new musical instrument interfaces, and of course, computer-aided composition by means of ``Constraint-Based Programming'' (aka Constraint-based Composition). -A paradoxical concern for musical apprehension arises: will intelligent aesthetics dictate musical taste and further assure top-twenty hits?- Hopefully time will tell, and the ruling would thus rest on human perception instead.
AI is proportional to the amount of intelligence we provide. Inevitably, it follows that some patterns are perceived as logical and others are reasonably deceptive and thus not necessarily logical. For a definitive rational ``human decision'', it follows that it takes precise twisting of illogical patterns to conspicuously assert a right choice. Consequently, two confronting minds are better than one.
[ Tue Mar 19 01:25:04 PM PDT 2024 ]
Voice Textured Sounds: Implications of The Fourier Transform
It's been over thirty years since reading Dick Moore's tutorial concerning discrete signals and the Fourier Transform (FFT) in the Computer Music Journal. Before or at the time, this subject only came up in graduate electrical engineering courses, or among computer science people embarking on sophisticated numerical methods. In a few words, a musician coming up with these conceptions must be speculating or wasting somebody else's time. But to my conscience this was paramount, because knowledge of the FFT thereabouts was a ticket to computer music circles all around. -Though, later found out that only a few of us really tackled the issue-.
A couple of friends at Ircam, Camilo Rueda and Francisco Iovino, shared more than enough bibliography, including xerox copies of the Oppenheim-Schafer book, hard to get in local college libraries. My goal was to teach undergraduate music majors at junior level the basics of a computer as a tool for composition, an endeavor that several of my colleagues were trying at U.S. music schools with limited success. On this journey, came across several applications of Fourier theory including convolution, linear predictive coding (LPC), and the phase vocoder, in addition to using spectra for basic “additive synthesis”. Nonetheless, pretending to do spectral analysis on a Macintosh Plus was very time consuming, even if using 11-bit signals. It would take several minutes to get the analysis and resynthesis of a five-second sound at an 11.025 kHz sampling rate.
Although, focusing away from evoking memories of other times, it is worth pointing out that getting so insightful on The Fourier Transform was worth every minute, page or exercise. Should credit Xavi Serra at CCRMA for his dedication to several of us while explaining all the gory details of this subject matter. -Years before, had seen Xavi's presentation of his dissertation at the University of Illinois computer music conference-. Thereafter, I would use Spectral Modeling as opposed to Physical Modeling on several of my compositions, instead of trials on the venerable Phase Vocoder paradigm that I had tried on a few of my works before.
Fast forwarding to 2023, a number of days ago I thought it was time to dig into Fourier again. Through the years I had been fascinated with Charles Dodge's “Speech Songs”, as well as Paul Lansky's “Idle Chatter” pieces, all using LPC. Periodically showed them to my students as examples of cutting-edge computer music compositions, and as proof that these methods work. Further, I was scratching my head after bumping into some magazine articles flaunting LPC, and how it is being productively and efficiently used in speech synthesis for devices and gadgets that talk. Later I would find out that LPC is the technology behind VoIP. Not surprising: after all these years, still a current affair. Though not sure if it still finds a niche in the music endeavors of composers and laptop performers. Reason enough to bring back ideas and thoughts from all the research done before.
Research and compositional applications of LPC were popular in the second half of the 1970s and in the eighties, explicitly in circles close to Bell Labs and Princeton in New Jersey, and among people doing speech synthesis at the time. LPC, also viewed as a data reduction technique, is for music purposes a subtractive synthesis technique. It is named as such because prediction of pitch (or rather frequencies) depends on past outputs (e.g. delays). A weighted average running from past samples to the actual present sample acts as a filter for a new resulting signal. Thereby, analyzing a voiced (or instrumental) signal will engender formant regions bounded by all-pole filters, pretty much like in a band-pass filter.
In the prediction of samples with pitched sounds, results can be straightforward, but most sounds enfold a nonlinear excitation part made of noise, prone to error while predicting the next output value. Consequently, and to be short on this matter, while using LPC as a music tool, when the analysis of a syllable or a musical tone is done, there are voiced and unvoiced decisions harvesting a pitched signal and a residual. This process, known as the “analysis procedure”, supplies data that can be further used for re-synthesis of the original sound. In a nutshell, re-synthesis means a wave train is triggered and passed through a filter bank using coefficients from the analysis, and subsequently mixed with a residual noise sound (also from the analysis) serving as excitation or even as an amplitude envelope (depending on the sound). Thus, an LPC resynthesis algorithm bears two components: the actual pitched analysis and the excitation or residual component.
In theory, if we do a re-synthesis of the analysis part (i.e. pitched or voiced) and the residual, the resulting sounds should be close to the original. What makes it so appealing is that we can dilate a signal, or even change pitch, so that a female voice sounds like a male voice. Being more innovative, cross-synthesis between voiced sounds and other kinds of sounds can also be done. Worth mentioning, Linear Predictive Coding determines characteristics of the vocal tract. Just as the vocal tract changes characteristics during the course of speech, the filter response changes from segment to segment. In practice the filter response while using LPC continuously changes during the duration of a vocal sound. Even so, LPC analysis is not restricted to spoken words. It can be applied to a wide variety of sounds; however, better results come out of pitched sounds.
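A hedged, frame-based sketch of the analysis/resynthesis procedure just described (a minimal reading of it, not any canonical implementation; it assumes librosa for the all-pole coefficients, scipy for the filtering, and uses a crude impulse train with a fixed, illustrative f0 in place of a proper voiced/unvoiced decision):

    import numpy as np
    import librosa
    import scipy.signal as sig

    def lpc_resynth(y, sr, order=16, frame=1024, hop=512, f0=110.0):
        out = np.zeros(len(y))
        t = np.arange(frame) / sr
        win = np.hanning(frame)
        for start in range(0, len(y) - frame, hop):
            seg = y[start:start + frame] * win
            a = librosa.lpc(seg, order=order)        # all-pole coefficients
            exc = (np.fmod(t * f0, 1.0) < f0 / sr).astype(float)  # pulse train
            e = sig.lfilter(a, [1.0], seg)           # inverse filter -> residual
            gain = np.std(e) * np.sqrt(frame)        # rough energy matching
            out[start:start + frame] += sig.lfilter([1.0], a, exc * gain) * win
        return out

Feeding the residual e back in as the excitation, instead of the pulse train, would reproduce something close to the original, which is the sanity check mentioned above.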
Cross-synthesis refers to techniques that start from the analysis of a couple of sounds and use the characteristics of one sound to modify the characteristics of another, usually involving spectrum transformations. LPC cross-synthesis takes the excitation from one source sound (pitch and event timing) and drives a time-varying spectral envelope derived from the other source. For instance, one can replace the simple pulse-train signal used to create voiced speech with another complex (maybe instrumental) waveform, such as the sound of a flute, resulting in a “talking flute”. The above is widely described in Computer Music texts such as (Dodge and Jerse, 1985), (Roads, 1996), (Loy, 2007), among others.
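A minimal cross-synthesis sketch under the same assumptions as the previous one: the all-pole envelope comes from a voice, and any other sound, say a flute, replaces the excitation, giving the “talking flute” of the example:

    import numpy as np
    import librosa
    import scipy.signal as sig

    def lpc_cross(voice, exciter, order=16, frame=1024, hop=512):
        n = min(len(voice), len(exciter))
        out = np.zeros(n)
        win = np.hanning(frame)
        for start in range(0, n - frame, hop):
            v = voice[start:start + frame] * win
            a = librosa.lpc(v, order=order)       # vocal-tract envelope
            x = exciter[start:start + frame]      # flute as excitation
            out[start:start + frame] += sig.lfilter([1.0], a, x) * win
        return out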
Voice-textured sounds are related to lyricism, meaning expression in imaginative ways, including mood. Some call it lyrics adding drama to a song. Have used words (spoken and sung) in several compositions this way. No matter what, spoken language adds a desirable layer of expression to a composition. Otherwise, music textures underline lyrics in a song. Whatever the situation, vocal sounds have always been part of musical expression. But do words need to be legible? Do they have to show some sort of semantics? Chant seems more like lines sounding in a way that cannot be expressed otherwise.
Been experimenting using LPC to get voice-textured sounds that hopefully will be useful in one or several compositions. Some sounds carry a characteristic digital spark, but others are discoveries, eminently while taking advantage of cross-synthesis. As a principle, it is relevant to avoid literal instrumental pitched sounds on purpose, and thus LPC serves well. Further, the ability to make spoken words -sing- adds to the lyrical component of a musical phrase. Not to mention, changing pitch in order to harmonize, and furthermore elongating a sound by manipulating its textures, supplies degrees of freedom and dynamic range. Like they say, stay tuned for new compositions using implications of the Fourier Transform.
[] Dodge, C. and Jerse, T. (1985). Computer Music, Schirmer Books, New York, NY. pp. 6:201-204.
[] Roads, C. (1996). The Computer Music Tutorial, MIT Press, pp. 5:201-209.
[] Loy, G. (2007). Musimathics Vol. 2, MIT Press, pp. 9:411-17.
[ Wed Dec 6 08:23:01 PM PST 2023 ]
Tens of thousands of permutations from a few Tone Rows
Here we are, astounded by the thousands of permutations resulting from a few tone rows for breeding the next piece(s). It appears that choosing one is far from trivial decision making. Here choices involve aspects that swirl from pragmatic methodologies to immeasurable complexion in aesthetics and imagination. But how did we get here? It looks like as humans we need choice. Besides mating and cues for survival, just like in nature's paradigms, quantity is substantial. Seeds in the wild or hundreds of sperm thereabout, but only one fuses with an egg. Consequently, creatures of all sorts don't fixate on just one choice. Survival depends on looking around. However, we derive prejudice along with intuition, helping out in bargaining and craving for decisions. But we implicitly don't necessarily know what is right, therefore «hope» is pivotal for making the best choice for what is to come.
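As an aside on the sheer numbers, here is an illustrative count (my own sketch, not the actual program behind these pieces): classical serialism already gives 48 forms per row, and merely permuting the two hexachords of a single row runs into the hundreds of thousands:

    from itertools import permutations

    row = [0, 11, 3, 4, 8, 7, 9, 5, 6, 1, 2, 10]   # a hypothetical 12-tone row

    def forms(row):
        prime = [[(p + t) % 12 for p in row] for t in range(12)]
        inv = [[(-p + t) % 12 for p in row] for t in range(12)]
        retro = [list(reversed(f)) for f in prime + inv]
        return prime + inv + retro                 # the 48 classical forms

    hexa = list(permutations(row[:6]))             # 6! = 720 orderings
    print(len(forms(row)), len(hexa) ** 2)         # 48 and 518400

Hence ``tens of thousands'' is, if anything, a conservative figure once a few rows are in play.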
There is symmetry in forms in nature that contours semantics about the perception of things that just look alike, and patterns cast from a single form or shape. This is instrumental in capturing surroundings that are implanted in our brains, assisting and supporting judgment and prejudice. Like being in the fall, in a forest, trying to focus on which one is the best «birdsong». Even so, realizing all birdsongs sound good to the ears whether or not they convey a message, meaningful or not. As a consequence, like birds using the same language, it might be inferred that our thousands of tone permutations can also share something resembling a genetic code or kernel. Hence, if we frame composition as a plan for a lab inquiry, the outcomes of testing combinations supply useful clues while creating an instance of an element in a piece.
In laboratory situations (music-lab) we end up structuring candidates by literally tying knots between elements consisting of a permutation of a tone row. There, we also have combinations that contrast along each permutation. Several combinations chart a timeline consisting of a series of events with gain and attenuation. A test objective for this kind of experimentation lies along the grammar and semantics of the above combinations, so that the timeline might convey meaning or not. Thereupon, how we choose each tone ends up relegated to a random process, making for good probabilities of success. Other domains outline this process as a «lottery», no matter what the outcome is. All things considered, composition by means of thousands of combinations from a few tone rows resembles a gambling activity, hoping to get the best outcome.
Whether permutations of tone rows add entropy to a system of composition or not, we feel fearful that from now on, compositions will come out sounding the same. Stepping on the argumentation just elaborated, it appears that an ecological path (e.g. nature's path) is a reasonable direction worth following. Hereupon, should try using as many tone row permutations as possible. From a romantic perspective, every one deserves a chance, otherwise why did we get it in the first place. Recall some musicological arguments on similarities between Haydn and Mozart. Moreover, attributes of at least a dozen of Mozart's pieces might portray the same work, though to our ears they sound like different pieces. Following the academical remark, “these pieces carry the same DNA” (i.e. follow a similar, perhaps symmetric pattern).
After all, combinations are scratch material and not garbage, standing a chance of contouring a timeline and thus tying knots for a composition, given object, context and outline. Here hoping we are making use of the reporter's simile device to characterize several things and show equivalent aspects in a paragraph. If an answer to -how did we get here- is not so convincing, let it be said that software is to blame. Several programs using Scheme, Lisp and MatLab have been written aiming towards ``algorithmic composition'' but ending up in statistical domains. Like they say, ``it's all data nowadays''.
[ Sat Sep 30 09:05:10 AM PDT 2023 ]
What about those instrumental sounds? Still using them?
Fascination with instrumental sounds for composition remains among my quests. One reason being the fact that an instrument implies 'Live Performance' of music, and thus more than a hundred millimeters apart from sound art. In my search for a musical language either by-means-of or -mediated- by a machine, ended up gearing towards a sub-subject known as “Physical Models”, hoping that one day it would be a category of composition by itself. Though, it never happened, inasmuch as definitions of the digital keep revolving, paired to whatever the novel technology would be. Aside from “ChatGPT” statements about physical modeling, our understanding was that of a parameterized description of acoustical phenomena inside an instrument in the form of differential equations, usually transcribed into an algorithm or MatLab code. We made measurements as to how sound was produced or manipulated using an instrument. Numbers gave constants and variables, further functioning as parameters and thresholds (thereby balancing the wave equation), so that timbre in a model was as accurate as it could be, or rather the “closest thing to a real instrument”.
Wrote conference papers on the subject, submitting them on purpose, knowing that the acceptance rate was rather low, since few people in music fields either knew or rather lacked understanding of what we meant in our hypotheses. A constant response from peer reviewers was often along the lines of -why use an approach with models when a real instrument could always be used-. However, never gave up, and persistence proved worthy in many ways. Signal processing techniques for Physical Models have always been challenging and, by themselves, motivation for keeping going. As absurd as it might seem, toggling parameters to unusual and unstable values translates into inquiries as to how a virtual sound could be. Take for instance what happens if we keep increasing pressure parameters on a plucked string, aiming at infinite vibrations as time goes by. In the case of a real string, it might break apart, while on the model we keep trying as long as seemingly possible. Meanwhile, there is a mystique sprouting from the craving that any sound could be possible by means of analog and digital electronic sounds, brought to us by inquiries such as those of Jean Claude Risset and Pierre Ruiz in the golden days of Music-V and Bell Labs in New Jersey.
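A sketch of that kind of push past physical limits, using the same illustrative plucked-string loop as earlier in these notes (the gain values are arbitrary): with loop gain below one the 'string' decays as physics demands; nudge it above one and the model rings louder and louder where a real string would break.

    import numpy as np

    def peak_after(gain, freq=220.0, dur=1.0, sr=44100):
        n = int(sr / freq)
        line = np.random.uniform(-1, 1, n)
        for i in range(int(sr * dur)):
            line[i % n] = gain * 0.5 * (line[i % n] + line[(i + 1) % n])
        return np.abs(line).max()

    print(peak_after(0.996), peak_after(1.02))   # decays vs. grows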
At the time (twenty-plus years ago), when for some of us -models- were a hot topic, most people focused on real-time “live electronics” and human-computer interfaces (HCI) instead of delving into signals, and though some gave the excuse of “I'm not a math person”, reality unveiled our idealism by highlighting that manipulating algorithms on computer screens was far from being an instrumental technique. Normally, performed compositions are chains of simpler issues incarnating into gestures and events. Even the best graphical user interfaces (GUIs), such as those on NeXT computers, were tricky for most performers. Worth saying, getting around these issues dispensed “live coding” as an electronic music instrumental technique widely scattered nowadays. But many of my colleagues at the time narrowed in on new musical interfaces (aka NIMEs), preferably centering on physical interaction instead of software and programming. To say the least, a paradigm following this course was Max Mathews et al.'s “Radio Baton”, among others, substantiating why this path was favored by most new music practitioners. Nevertheless, several physical models shone out from obscurity by appropriating these NIMEs. As illustration, a Radio Baton was used for controlling the model of “Scanned Synthesis”. Further on, recall a pressure wind controller used for performing on the physical model of a flute.
While on topic, should express guilt over a response to Gary Scavone when he asked about instrumental sounds I got from the Synthesis Toolkit (STK) for my piece “SanSounds”. He had just listened to this piece in a concert at the old CCRMA ballroom. Regretting that answer because I characterized those 'brassy' sounds as not useful because they didn't sound cool. Quite pedantic on my part, and not a nice way of saying that getting a good sound out of STK was not so user-friendly. Although, I was barely admitting (perhaps hiding) little knowledge of signal processing and even the Fourier Transform at the time. But my guilt through the years triggered further inquiry into the acoustics of musical instruments and parameterization. STK models have been a springboard in a lot of my research, meaning that I made my own versions of STK code for my own purposes and enjoyment. Even tackling C++ for STK, which through time still remains the quintessential environment for physical modeling. Gary and I had awesome productive talks and empathy, geared with a vision of how music technology should influence composition and performance. Needless to say, after the Ballroom concert he further listened to other works of mine, always using keen ears and providing worthy comments and critique. Dr. Scavone still maintains STK from McGill, where he directs the Computational Acoustic Modeling Laboratory (CAML). A lot of this code has been ported into ChucK, benefiting from its real-time live performance capabilities.
Used to be a brass player, but in my own compositions I have been avoiding this sound or related timbres. Seems that currently I devise structures with sounding mixtures around, so that one day I would make a live performance with them; who knows? Perhaps a contemporary sound of a Mariachi Ensemble? Maybe. Most of my compositions involve physical modeling of instruments, concentrating on bowed sound fabrications. But do they sound electronic, electro-acoustic, concrète, or the like? Probably not. Likewise, are these sounds pitched? Probably yes. If pitch was a constraint for composition, I don't find it so anymore, considering there are alternatives to tuning, including micro tuning. Physical Models can be tuned in any desirable way. But pitch depends on perception and prejudice. As objectionable as it might seem, compositions are not categorized by their soundings and textures. Instead they now seem to be measured by features of the performances they evoke. For good or bad, pieces using modeling of instruments might characterize conjectures of known instruments performed by traditional or alternate practices. Hence, quests here carry on into persuasion, and how to make the listener aware of a not so trivial composition, thereby getting some air for instigating that there's still room to research acoustical models of sounds and the performance of instrumental sounds.
In this enchantment with Physical Models in composition and performance, can't help feeling induced into a sphere paralleling a traditional conception of what a piece is conformed by, constantly envisioning how the “new” is reinstated. Why have we used, and why do we still use, this method of coming by tonalities in sounds that would not emanate any other possible way? For our purposes this is summarized by expressing that we found these sounds interesting and, to a great extent, we came to a point where we know how to manipulate them, so that they are flexible means of expression. For certain we will not be using models for a performance or rendition of Wagner's Lohengrin, nor even Miles' Blue in Green. On this ground we would certainly tune a Piano to the 13-note Bohlen-Pierce scale for listening to its hexachord permutations.
Compositions using modeling of instrumental sounds can be listened to [here]. More on the subject: Chafe, C., Case Studies of Physical Models in Music Composition [here].
[ Fri Jun 16 04:46:02 PM PDT 2023 ]
Speaking of machine thinking!...
Following my grain-of-salt on a thread apropos of the current hype on Artificial Intelligence, but yearning over predictions made in the Computer Music Journal, and periodic blather on AI with colleagues for more than twenty-five years now.
Just hoping "chatGPT" doesn't become doesn't become a Rosetta stone for
our age, our civilization.
" Here struggling if I should write my comment using chatGPT "
But for what might be worth, I'll be giving my brain another shot.
The fact that getting a response, or rather a 'suggestion', while using GPT requires implicit human intervention outlines some form of 'intelligence', which reminds one of the Chomskyan unrest that has surfaced from time to time. Furthermore, all signs point out that this kind of interaction requires cognition to a great degree.
Pointing here that human intelligence matures through the years, and though GPT can also be trained as time goes by, the physiology of the body disengages the brain from the system. Take for instance our suspicion of taste.
For the not so skeptical, taste is part of the body. Recall conversations in the old CCRMA trailer commenting on how reactions to taste excite chemical reactions throughout the body. However, not limited to food, these 'intuitions' transcend onto how art and music intricacies are measured and validated. Philosophical aspects should also be taken into account, because hermeneutics bears on how we depict illustrations of an image we have from a song or a painting. Therefore taste implies embodiment of something we are perceiving.
That we listen to Steely Dan, and for that matter to Miles Davis and Brahms, is a function of how we interpret and make sense of an object or a signal. Needless to say, 'intelligence', and a lot of the above, has been pointed out by our own Dan Levitin in several of his writings and talks, in case science is needed to make sense of it. That we listen to a song over and over might be chemistry in our bodies, philosophy in the mind, or, -yeah right-, a neural net!
While reading these messages, can't stop thinking of Chowning's annotation that music was not all about sound. "Touch plays a huge part in music". Not so candid, remembering these comments, can't help scratching at another layer of intelligent cognition. What about the role of mystery? Meaning that, "there is always something else between the lines".
Growing up using dictionaries, the Britannica, and later reference cards in libraries, until gwwgle came up. The first time I searched for a text pattern using gwwgle on Firefox was on CMN11, at least sixteen years ago, having Norvig's book on the desk. Have to admit I was dubious of its strength. However gwwgle on Firefox, like pencil, dictionaries and thesaurus, is still around. Have not opened the Chicago Manual of Style in a while.
I envision misuse of ChatGPT in several domains. Just naming a few: the writing of legal contracts, magazine articles, CVs, jokes and obituaries. Though, commendable while scripting 'Westworld'-like episodes.
[ Sat Mar 11 07:54:45 AM PDT 2023 ]
Sonority seldom arises from ambiguous Pitch: On the use of pitch after Electroacoustic premonitions.
In acquittance for the use of pitch in compositions such as Madeira and ReadingT, among others.
Lectures on the aesthetics of «Musique Concrète», and in fact on much of electroacoustic music composition in the eighties and nineties, outlined that pitch was a distraction in the perception of sound. In a piece for flute or violin and live electronics, anything other than pitched tones, harmonics and the like was usually referred to as 'instrumental effects', drawing a threshold between note and sound. In fact, electronics in the form of signal processing were also labeled as amplifiers of effects. We were taught that «Ring Modulation» is a process, a function of pitch and frequencies, that acted somehow as a procedure for transposing pitched tones onto their complementary non-pitched sounds, cluing non-pitched, clanging percussion-like sounds. But after more than a couple of decades -in the age of music information retrieval (MIR)-, it seems that pitch is not a distraction from sound any longer. Ambiguity of pitch still disposes a tone toward a musical sound.
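A quick sketch of the «Ring Modulation» arithmetic behind that lesson (the frequencies are arbitrary illustrations): multiplying two sinusoids leaves only their sum and difference frequencies, so a pitched tone against an unrelated carrier lands on inharmonic, clang-like partials.

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 440.0 * t)      # a pitched tone (A4)
    carrier = np.sin(2 * np.pi * 567.0 * t)   # unrelated carrier frequency
    ring = tone * carrier                     # partials at 127 Hz and 1007 Hz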
Some now argue that sound is not an objective in new musics anymore, because the state of the art is rather focused on new instruments and interfaces. On this quest there are plenty of efforts and struggles to get ``the-right-pitch'' by the ``right sound'' in HCI courses at MIT, Georgia Tech and Stanford, just naming a few. Here a right-pitch poses attributes en route to a tone-like sound. From a perceptual perspective, untuned pitch is propitious to noise and ambiguity, thus leading back to reading the opposite: sound might be a delusion of tone. -And there we thought we would get the listener's attention away from melody and harmonic progressions-. Albeit no blame game for falling into the tuning trap: new instruments imply live performance, and as much as we try to avoid 'tuned frequencies', instruments are a function of pitch.
Recall being seated in a circle on the blue chairs in CCRMA's old ballroom discussing pitch and resonant modes of membranes when John Pierce showed up unexpectedly to join us in our conversation. He kept silent and listened to our points most of the time; even so, his presence coerced ideas because of our presumption about what he had coined as «musical sound» with Max Mathews years before. Among topics: ``membranes are tuned according to their size, density, material and maybe others''. We can take the Fourier transform to get a membrane's partials, but not so curious minds usually focus on the time domain, unless notions of spectralism circumvent a creative mind. In such a case, partials of membrane-like sounds are taken for getting clusters, extended chords, orchestration and even harmonic or inharmonic development of a piece. Did not get hypotheses or theoretical conclusions out of that gathering, but a lot of the points cleared a scheme by connecting dots on what was meant as a sound portraying music auspiciousness. Incidentally, a common denominator on how combinations of pitches spur sound development was encompassed by all of us that afternoon.
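A small sketch of that Fourier point (the mode ratios below are illustrative, loosely drum-like, and the helper is my own): synthesize an inharmonic membrane-like tone, take its transform, and pick the strongest partials, the raw material a spectralist would reuse for clusters or orchestration.

    import numpy as np
    from scipy.signal import find_peaks

    sr = 44100
    t = np.arange(2 * sr) / sr
    modes = 180.0 * np.array([1.0, 1.59, 2.14, 2.30, 2.65])  # inharmonic ratios
    x = sum(np.exp(-3 * t) * np.sin(2 * np.pi * f * t) for f in modes)

    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    pk, _ = find_peaks(spec, height=spec.max() / 10, distance=50)
    print(freqs[pk])                          # ~ the five mode frequencies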
It might be that after this moment our unconscious headed into micro tuning, and particularly into the thirteen-note Bohlen-Pierce scale. Due to phase and tangled intricacies, pitch combinations change resonant modes in an instrument, hereto generating resonances that create sonority and textures. The use of pitch might constrain listening; nonetheless, sequences of pitches are perceived as melodic sets, concepts, themes and variations. In composition contours where sound is an aim, overlapping of tones is catchier, and clusters of tone progressions direct features into -concept- while casting a piece. Further on, a sequence of pitch sets into clusters labels identity in a piece. Stocked with a trend heading into live performance and instruments on the stage, the use of pitch is unavoidable. However, chasing after the musical sound as «new sound» should be reachable while working with frequencies, partials and harmonics. Though, it should be explicitly acknowledged that the ``musical sound'' is an individual and personal notion. Means for attaining a ``great tone'' should always be explored, and questing means further steps that certainly would widen creativity and expectations in the production of new works.
[ Sat Dec 10 03:12:41 PM PST 2022 ]
Physical Modeling still on the scope, challenging and a trail to Acoustics
A couple of days ago, while holding the Stanford Magazine summer issue on 'randomness', felt elated that numbers still matter on Alma Matters and surface beyond shuffling, polls and slot-machine beating. Questions as to ``which trail to take?'' remain vivid in research and development, design, politics, just naming a few. Paramount in artists' minds, uncertainty is pivotal inasmuch as taking paths that conduct to sources triggering a creative mind. Further on, indeterminate outcome seldom arises, as our adopted ``Quest for Computer Music'' illuminates winding roads to expectations. Given tools and heuristics sprouted over the years, complete and thorough answers have surfaced as computer solutions to what we want hastening music to or from a machine. Unknowns present in issues like ``how will this performance sound?'' twist the common sense and exactness of other human fields. Thus as practitioners of an art, we get hints and hope from normal curves, stochastic paths and probabilistic values pointing to mindful trails. Albeit, it is widely known that tossing dice falls short of creating a melodic line. -At some point there has to be human intervention-.
As to the colloquial promise of facing new music technologies, there are allegations from acute critics of music eagerly posting that efforts on synthesizing new sounds have so far resulted in just one timbre, namely ``Electronic''. Wish efforts would surpass conventional thinking and myths. A superfluous guess of this kind permeates the lack of willingness to open ears shown by a majority of listeners, and misunderstandings of technological advance. Sidewise we concede that a composer nowadays would wonder whether a listener is willing to listen to the actual sound of a real instrument or rather 'stick to the mix'. Little do people know the steepness of the slopes to accomplish a synthesized sound bearing natural acoustical qualities and resembling actual instruments or phenomena. For more than fifty years, digging into electronic and acoustical means has been focused on the above and farther on. First we wanted to replace the orchestra, then just a violin, a flute or a trumpet. But not surprisingly, dead ends have lighted otherwise random paths, forging new music and trails to a better understanding of acoustics.
A physical model assures, to a degree, a mimic of the sound of an instrument, and the possibility of an extension, all in the form of an algorithm. Here we mean a white-box metaphor in contrast to the black-box paradigm found on commercial synthesizers and miscellaneous hardware. Given a definition such as 'Parametric Modeling' of acoustical phenomena, nominally ``balancing the wave equation'' for instances such as instruments, rooms, spaces, environments, turbine engines, among many, these models exhibit not only physical quantities and variables but also means for manipulation of traveling waves as characteristics of the sound being modeled. Have to admit that I've never been skeptical of this method like most of my colleagues, inasmuch as some sort of faith that physical modeling works in composition like nothing else in real or virtual worlds. Take for instance a Maraca with thousands and thousands of seeds, or a Piano tempered using scales apart from diatonic.
On our conscience with these models, there is resemblance to the real instrument, but not quite. Call it perception and even perfection: while we get the right tone on one or several notes, when all features are grouped, we find deviations in contrast to the acoustical and mechanical instrument. In getting as realistic as possible, we find nuances which make things challenging. From an optimistic viewpoint, timbre here is almost never electronic. Combinations of parameter values give comparisons as to how far or how close we are in fine-tuning the model. But if we leave the domain of physics for music, there might be discord as to how we perform with a model. This is where the approach would seem discouraging, but on the contrary, and not from afar, here is where issues seem inventive.
Given deadlines and composition commitments, most of us don't want to spend time experimenting with how to parameterize a model of a sounding device while we can still use an authentic and real one. Nevertheless, if more justification is needed for the above points of view, should say that as part of childhood memories, models in contexts not limited to architecture but to engines, trains and airplanes were frequently at reach. Was close to aviation because close friends of the family engaged in the dawn of aviation in the Caribbean at the beginning of the twentieth century. Thereby, eavesdropping about experiments on propeller engines and jet propulsion was substantiated by examples which were not that fictional. Thus, a miniature model of an engine, or in fact any device generating a process, seemed intrinsic for understanding the machines around. Those were the days of cardboard, plastic and tinkering sets. Call it prototyping, development of ideas, but keep in mind everything was done by hand. Consequently, model planes were assembled and either flew or not.
Many said this was a sort of ``day dreaming'' with ideas that would never materialize. But the day came when I also got my hands on an article from Scientific American referencing computer modeling (circa the 1980s). In those days, if we got our hands on a machine, a computer, interactions were synonymous with payroll, receivables and accounts payable. In that article, in front of my eyes, there was pseudo code, and actual Fortran code, of a program modeling heat dispersion in a combustion engine (not that I understood the heat dispersion process). Recall the author's premise in this article: ``In an algorithm, there can be parametric models to test anything imaginable, and means to push these models' boundary conditions to their limits, free of wasting physical resources''. Given this premonition a constraint was surging: Fortran compilers were only available on mainframe computers, with all the bureaucracy surrounding them. However, -held on to that thought- until the advent of NeXT machines.
Always a believer that music technology enlivens everyday gadgets and more, meaning that a lot of work on subjects mentioned here is seldom discarded. If vacuum tubes and solid state traced paths in communications and information theory, they also gave rise to Moog and ARP synthesizers and became emblematic in music production instances. Technology is a window framing anything as new, possibly better than before. If synthesizers were not good enough, then there would be samplers, and so on. Corroborating our insight about modeling, worth saying and fortuitous, we were at an AES show when a company called ``E-mu'' showcased a prototype of a digital sampling instrument. On site, of course, skeptics complained as to why sample a real instrument while actual instruments can always do the job. But to this day nobody can deny the business success of most Emulators, particularly those in the business of incidental music soundtracks for movies. To a great extent, for several composers it has been an orchestra at their fingertips. But a sampled sound is only a recording, a chunk of a partial fixed segment of a sound sampled at a timed interval. Industries profit from technologies because venturous minds harvest them, far from anomalous thinking on computer modeling with applications to music.
Yet, it seems easier to teach FM synthesis, Ring Modulation and a myriad of other audio synthesis methods; in the meantime physical modeling remains a challenge and is avoided in most sound synthesis curricula. The reason being, we are modeling acoustics using math and physics for excitation, traveling waves and resonant bodies. For a daring mind this demands time and testing, as in a laboratory situation. Even at the apogee of sound in physical models, the Karplus-Strong model of a plucked string still poses theory and knowledge, because initial conditions and boundary conditions should be known beforehand. However, for those benefiting from the industry, black-box hardware has made it into models in the form of software plugins. People use them lacking the knowledge that they are still based on computer models of an electronic circuit!
In this essence, should say we've used the above approach to instrumentation for music contours on pieces envisioned using models such as plucked string, flute, bowed string, scanned synthesis, banded waveguides, maracas, cabasas, Leslie cabinet, piano, rooms and spaces, among others. Research on the subject has been pursued by colleagues, although we have tackled implementations of most of them. To say the least, working on a physical model remains a work in progress, because there are always findings that improve the algorithm. Needless to say, delving into this subject matter also throws changes of direction, sometimes on the right track, sometimes discouraging. Even so, with all trials and tribulations there is always a finding that seldom gets discarded. An example of modeling acoustics for sound source motion can be found in using Lissajous Figures to trace paths for sound motion [HERE].
[ Sat Oct 22 04:47:23 PM PDT 2022 ]
Segmentation && Pattern Classification
Working on a piece named ``Madeira'' for pitched percussion and telematics over an ``Open Score.'' However, in research related to this composition, and on the issue of embodiment of computer music styles, as usual, past ideas in line to be forgotten suddenly are again relevant. For those of us who through the years were able to attend CCRMA's Hearing Seminar under the baton of Malcolm Slaney, there are always ideas stocked in the mind, usually discussed at the venue in one or several sessions. Even so, outstanding to say the least was Dick Duda's session on HRTFs, barely known at the time, twenty years ago. Little did we think that they would become mainstream in hearing aids and now in headphone binaural listening. But Prof. Duda's ideas were not only constrained to this subject; instead they are part of issues related to signal processing and localization.
Still resonating! Recall concepts related to several lectures we had with him on the subject of pattern recognition and signal processing for sensors, on topic for the HCI class. A lot of fad has been around connected matters, in particular because of machine listening, searching the web, and data mining, among others, which certainly overshadow original thoughts. Seems necessary to outline that ``Pattern Classification'' is still a human process, closely related to a person's everyday perception through all human senses and further into language. Remembering Prof. Duda's lecture with some notes pertaining to ``Segmentation'' and ``Pattern Recognition.'' In other words,
A segment of a signal is taken and then further
compared with the rest of the signal to see if matches are
found, thereby producing repetitions and consequently a
pattern of some kind.
In general, it can be said that if we find a pattern, behavior in a signal or a process can be additionally predicted.
In his lecture some clues were portrayed: pattern recognition helps in reducing and simplifying periodic and repetitive processes. For instance, take an audio signal and see if it can be reduced to a single period (a period is a segment). If we segment the signal, we can predict what follows. Two or several segments can be compared to see if there is a pattern. In symmetric visual images this is easily depicted by isolating a seed or a slice of an object. By replicating a slice over a space, the rest can be extrapolated. Examples of these are intrinsic to Islamic art, tiling, and wallpapering. Thus, it is reasonable to think of segmentation as a qualitative process describing features of a process embedded in a pattern. An ingenious brain might take this as a data reduction tool. But in methods such as the 'circle of fifths' progression, it might be an opportunity for creation by means of arranging patterns on a timeline.
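A sketch of that first clue (plain autocorrelation; an illustrative reduction of the idea, not Prof. Duda's actual material): compare the signal against delayed copies of itself, and the best-matching lag is the candidate period.

    import numpy as np

    def find_period(x, sr, fmin=50.0, fmax=1000.0):
        x = x - x.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation
        lo, hi = int(sr / fmax), int(sr / fmin)
        lag = lo + int(np.argmax(ac[lo:hi]))   # best-matching lag = period
        return lag / sr                        # period in seconds

    sr = 8000
    t = np.arange(sr) / sr
    x = np.sin(2 * np.pi * 220 * t)            # a 220 Hz test tone
    print(1.0 / find_period(x, sr))            # ~220: the segment repeats here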
Patterns abound all around. It looks like we get used to polls so easily when they tell us there are tendencies for predicting the future based on past samples. But on the more pragmatic side of things, coming to tools and research for conceiving a new composition, we can see that in ``Open Scores'' we are telling performers just a bit (a segment, or features) of what a performance might be. To our expectations, one hopes to embody a gesture in a piece where performers take a segment for further creation of patterns and expression. Symbols on an open score are indications for generating figures belonging to musical sketching, on the assumption that joining segments casts form, just as repetitions of a single geometric line produce different shapes and objects, all products of a human mind.
\textit{Pattern Classification and Segmentation} can get too technical, but it is still worth noticing that this is a recurrent human activity in everyday life and can be a source of ingenuity. Been dealing with signals for several weeks now and, as practice tells us, there is always redundant data that can be simplified. On a side note, need to comment that new GCC versions on Linux seem to be coming out rather often.
[ Mon Jun 20 04:37:57 PM PDT 2022 ]
Jon Appleton: An inflection point
This moment looks suitable for reminiscing about other times, since Jon Appleton's passing several weeks ago is bringing back memories not so forgotten. Normally we try telling and cluing in on what has been done days, weeks or even a couple of months before. However, given the occasion, we are leaving any sort of technical report, descriptions, summaries or the like for the next entry, even though hinting that we've been revisiting cool hacks on abandoned software for composing. Hitherto, asserting that Appleton was an inflection point in a composition career is not an overstatement. Through the years contact between us was not so close and direct, but he remained influential in many respects.
I first found out about Jon Appleton in the nineties while teaching a course on electroacoustic music history and aesthetics, through his book “The Development and Practice of Electronic Music”, Prentice-Hall, Inc., 1975. Later I saw him in person at the San Jose State computer music conference, complaining about the poor and mediocre functionality of Apple's Sound Manager. -At the time Macintoshes were not so suitable for music or sound processing (only MIDI), until Apple acquired NeXT Computer and its sound legacy.- He was there with some of his students and composers from Dartmouth. The next year, to my surprise, I found out he was being considered among the guest composers for the Contemporary Music Festival in Bogota, where I was part of the board of advisers and on the organizing committee. Luckily Jon was able to fund his own trip to Colombia and be part of an instance of this now historic festival. Given the chance, I spent time with him sharing concepts and certainly arguing over his viewpoints and «concerns» about the state of the art of computer music at the time.
A bird's-eye view of Appleton's issues with new music had been published in the Computer Music Journal: "Live and in Concert: Composer/Performer Views of Real-Time Performance Systems," Computer Music Journal, Vol. 8, No. 1, 1984, and "Science in the Service of Music; Music in the Service of Science," Computer Music Journal, Vol. 16, No. 3, 1992. These points were outlined in his talks at the festival, prompting arguments and controversy. Part of his illustrations included performing on the “pizza-box” radio-baton to show how «real-time» live performance would take center stage in the evolution of computer music systems. Of course, he was demonstrating Max Mathews' “conductor” program while performing a composition of his own. Even though at the time (and still now) he was known for his role in developing the “Synclavier”, which in those days was the most outstanding instrument after the electric guitar, he was fond of performing with the radio-baton. -Savvy people in the audience wanted more answers about the Synclavier and fewer about the radio-baton-. Nonetheless, corroborating his ideas, and given that I knew Max and Charles Dodge were also regulars at Dartmouth's electronic music studio and seminars, having a radio-baton in Bogota was somehow historic.
Jon didn't waste time arguing that in music getting the right notes was not the objective. Instead, in tandem with Max's and Dodge's ideas, the right gestures were more convincing when listening to a performance; hence “the conductor program”. Questions such as how a computer can interact with a soprano in a live performance sat better with him than sound-synthesis tricks. Though, need to acknowledge that I dared to ask about sound, and likewise that I had hardly been exposed to Jean-Claude Risset's catalog of computer-synthesized sounds before these talks. That was when I got a response on the role of synthesis.
In perspective, after almost three decades, questions dealing with real-time “live” interaction alongside computing systems seem barely pertinent. We've come to an age when even the “Computer Music” jargon sounds démodé. Recall James Beauchamp hinting twenty years ago that this term would dilute into one or several music or art forms as time passed. But ten years ago, speculating on the future of this branch of new music at a related conference, I informally underlined the notion that, in spite of evolution and changing times, a mere definition of “Computer Music” is always in flux. At the moment colleagues agreed and came away caressing the idea. But for good or bad, the practice of music with a machine keeps going and, following Appleton's concerns, new issues are on the rise.
Can't deny evolution: new instruments, new interfaces and, on the whole, novel ways of performing and listening are floating all around. Parlance and frameworks within “digital music” might seem overcrowded, giving air to speculation and subjective meanings, because “the digital” encompasses generalized labels for whatever looks new and just out-of-the-box, implying that anything mediated is a byproduct of an informatics system. But musically speaking, not necessarily so. Appleton and other colleagues of his time held that music was made for humans by humans, extending to the claim that music made with computers is a consequence of the human touch.
A great concern nowadays relates to choice in listening, -music consumption for some-. Paraphrasing Jon on “what is up there?”, -on the cloud-, interesting and appealing to the ear and therefore worth listening to?, it appears that choice is harder than ever because of big data and the huge piles of sound files, or otherwise digital music, on servers around the world. In other words, meaningful music retrieval poses various constraints. Pattern-matching classification gets a choice closer to what is desired, but the listener's mind is what counts in the end. Worth mentioning are the distinctions between the practice of listening and that of conceiving a musical piece. Thus it seems imperative to point out that digital music might as well be stored energy of expressions of any sort, producing impulses on the ears that are then deciphered and framed as a piece, song, or composition. This contrasts with the practice of what we have been calling “computer music”, framed rather on composition and performance. Consequently there is a distinction between “digital music”, more related to media (and the media is the message), and the subject of “Computer Music”.
Furthermore, there are digital sound files on the web conceived by algorithms developed by people appealing to computer techniques. These can be renditions of a live performance or simply a recording of a so-called “tape music piece”. This raises questions about the pertinence of live concert situations versus digital renditions of the same music. Are listeners biased toward one form or the other? Tradition has given us the concert as a ritual where people listen to performers and, in some cases, to loudspeakers acting as performers. With media being ubiquitous, the ritual is just a subset of how music is presented to a subject these days. Therefore, knowing that in most cases a piece will be listened to on a smartphone, gaming console or multimedia system in lieu of a concert situation, getting a listener to focus on a computer music piece these days poses a huge concern.
Questions respecting the practice of music by computer keep coming. We are grateful to our mentors and to figures like Appleton for keeping our minds active and reluctant to accept just what is there. That being the case, musical involvement is substantial to performers and composers alike, to the extent that in some situations listeners become part of a piece. “Embodiment” of music stored in a file is like the matter embedded in a real-time concert situation; otherwise, whatever the digital format, only meaningless data is stored as bits and bytes. How this embodiment makes sense to a listener is consequential to getting a human involved in active listening. Here we want hearing to unfold meaning and sense, in addition to being interesting to a mind. Hence it sounds possible that quests around “music embodiment” offer hope on the issue of «listening choice».
For several years now, many people have conceived media pieces without live instruments, or merely alluding to them. General audiences find this practice appealing because media over loudspeakers and monitors is the norm, in addition to fashion and pop culture. In some cases a pop concert is just an over-amplification of a sound rendering on a smartphone. Against this sort of misinterpretation, enhancing compositions that count on media attributes and dysfunctions might help with what needs to be projected to a listener. Lessons learned in making tape music interesting to audiences should also be counted. Take for instance virtual listening and 3-D simulations or re-creations that empower liveliness in a sound object. Further, embodiment constructs mental representations of real acoustic environments as well as of instruments and, by now, with sound synthesis, there is an optional desire for discovery.
Perception of this sort alludes to a ritual in most cases. Recall that music is by humans for humans, and in a concert space there is transmission of signals among people. Still, not all of music's substance is evoked by sound and acoustics; paraphrasing Appleton again, “there are poetics in every song.” Even so, whatever the case, music remains intrinsically personal even when people retrieve it from digital archives. The body is still a mediator between energy and mind; thus a real human gesture sets a body in motion so that the mind makes sense of a message. Do we need educated listeners, or just listeners?
Needless to mention, Jon's concerns, as well as those of many of his friends and colleagues whom I also met, have transcended and made music sound better through the years. This is a legacy, alongside a constant redefinition of Computer Music as a heuristic for some of us. Remembering a great time in Bogota!
[ Sat Mar 19 03:10:32 PM PST 2022 ]
Graphic Expression and Open Scores
Composer-performer interaction can be a challenging task when issues pertaining to ``open scoring'' are on the table. Further, how an improvisation can be notated is often questionable. A performer's epexegesis at first sight, and thereafter the impulse toward what is being seen, are crucial for achieving success in a real-time performance. In these days of telematics rallying everywhere and abroad, not every instant can be captured by hand. Thereby a composition relies on the energy and state of mind of every musician taking part in a tele-performance. By now it seems trivial that scores have transcended into digital scores, here taking advantage of knowledge acquired in the days of pitch tracking for machine score-followers that used pattern recognition. As research and testing have shown, colleagues are even trying interactive digital scores which change according to the state of an improvisation.
This leaves teeny hope for automatic transcription, sort of what we thought years ago when we fantasized that a computer would listen to every note played on a keyboard and get every detail transcribed on paper. It is known that there are solutions to this challenge, and every now and then somebody writes a dissertation on another incomplete approach to automatic transcription. But wherever the trail goes, the practice of music notation is seldom scrutinized as a composition method. We want specifics notated because we want real-time performance. Live music remains uppermost in the majority of listening instances, leaving a lot to be discovered and learned from ``live'' musicians at a performing event. Hence the need for analysis, and for methods that encapsulate the qualitative and quantitative issues of a ``live performance''.
Interaction, a field in itself bearing profound implications for engineering and design, presents options which help solve composer needs. Here, interactions are often visualized by taking hints from graphic expression, which can be further expanded or customized. Symbols in this domain feel natural to people, since their connections are learned at a very young age. Therefore it seems rational to use the grammars embedded in graphic expression for open scores. Adaptations might be necessary, but a method for composer-performer communication should not be a complicated endeavor. On this path, it seems that time-based performing arts like music take the line as a left-to-right process when making sense of a graph.
To a greater extent, a line might be a sequence of dots as time goes by, and thereupon a nucleus for this grammar. Lines on an open score make a performer react to the semantics of what is being visualized. On a graph, lines can be horizontal or oblique, going up or down, possibly cueing crescendi or decrescendi, accelerandi or rallentandi, among other time-based types of music parametrization. Several lines create densities, just as sets of connected lines create objects. Usually in an open score performers assign value (meaning) to these objects depending on their state of mind, or on the pressure resulting from an ``impulse'' while rehearsing or studying the score. A given object relates to a gesture or an expression. Should be said that the above tips are far from inventive: procedures like this have been exhausted in the scores of Morton Feldman, Stockhausen and Cage, among many others.
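As a toy sketch of such a grammar, in Scheme: reduce a line segment on the score graph to its slope, and read the slope as a gesture. The thresholds and gesture labels below are invented purely for illustration.

    ;; Map a line segment (x1,y1)-(x2,y2) on a score graph to a gesture.
    (define (line->gesture x1 y1 x2 y2)
      (let ((slope (if (= x2 x1) +inf.0 (/ (- y2 y1) (- x2 x1)))))
        (cond ((> slope 0.1)  'crescendo-or-accelerando)
              ((< slope -0.1) 'decrescendo-or-rallentando)
              (else           'sustain))))

    ;; e.g. (line->gesture 0 0 4 2) => crescendo-or-accelerando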
In this kind of composer-performer interaction, questions conform a grammar consisting of suggestions, directions and clues for connecting the dots of a line on a graph in an open score. Recall that form in music is shepherded by tension and relaxation: if a line is homogeneous, nothing happens as a consequence, and thus the need for densities of any kind. Whether this language extends traditional notation or not, mixtures and combinations of the traditional and the new assist the hermeneutical interpretation of scores on this subject matter. As traditional notation becomes a subset used for hinting instead of a concrete transcription of what should be played, scores become an intersection of graphic expression and conventional notation. In a nutshell, an open score is a sequence of condensed data for a performance to take place.
Tools for accomplishing graphic expression in open scoring facilitate a method that helps prototyping, thereby providing means for composer-performer interaction. Aside from vector-based software and PostScript utilities, heuristics using algorithmic composition go beyond MIDI in order to generate conventional notation scores. But as expressed, not everything can be notated. Using several programs whose tasks complement each other assists in getting graphic expression on paper; meaning that there is no single package performing all the functions needed for open scores. A few come close, namely Open Music and PWGL. Nevertheless, for our purposes, have tried Common Music Notation and Lilypond, finishing everything in LaTeX. A workflow for each piece might be necessary: for example, a MIDI file containing a theme can be converted into a score with a MIDI utility; regular expressions and AWK might be useful to edit lines in the score file; once lean, the file is taken into CMN or Lilypond for additional editing; then graphic expression is added; finally, all material is typeset using LaTeX. In this chain of events other utilities might also be needed. For illustration purposes see KitiKit (a composition for viola and other sounds).
[ Mon Dec 6 03:47:13 PM PST 2021 ]
On several spaces at once, memories for present
Yesterday my nephew phoned, saying that his therapist pinpointed so-called "Cabin Fever" as a usual scene these days. Not so shallow an observation: it seems that many around might have its symptoms. Moreover, it seems that all days belong to the same season, thereby following a pattern far from a random walk for everyday, quotidian and thus recurrent life. However, mental activity on past details seems to guarantee well-being in the present. In this tone, need to outline that seven years have passed since we made happen the first Colombian inter-university teleconcert, taking place among the Autonomous University of the Caribbean in Barranquilla (which was hosting), the University of Caldas in Manizales, and Icesi University in Cali. The piece for this venue was "4.25 Orejas", or rather "BatSears", composed by me under the pseudonym Christian Ramones.
While preparing the event, the audience was skeptical of what was going to happen, not only on the technical side but also music-wise. Rehearsal time was scarce. We got connections and multichannel audio at a good sample rate, in addition to separate visuals. All that was left were instrumental levels and ensemble interaction. Central to this performance was a grand piano in Barranquilla, out of tune -don't think this piano was used very often-, probably used for comping hymns such as the city's and the school's. But to me it was a captivating white piano in a small foyer, able to produce resonant clusters of every sort. Doubters of any sound coming from this instrument, and of the tele-performance itself, ended up paying attention to some extent, but still disbelieved any expression coming out as bytes, notes or music.
In the end, all resulted in a telematic venue, historic to a great extent, happening at three different sites at least four hundred miles apart. Worth saying, in the land of Joselito Carnaval, avant-garde improvised telematic live electronics were a first, not well received because they lacked strong rhythm-and-beat entities. Likewise, dance and theatrical components were not part of the event. As is widely known, presentations of this sort pose more questions than answers for most details; hence the hope that something was seeded. From each performer's perspective, music was on the air, and on wires for that matter. Needless to say, the video parts and visuals were relished and consequently better received: again widely known, people are more susceptible to visual discoveries. Of course the visuals were also telepresent.
Coming back to present times, telematics are not an option anymore; to many they are the "status quo" for the newest expressions. Hereby general audiences need to acknowledge that the frontal listening paradigm governing performances has been broken, giving way to a myriad of means for listening and perception. An instance of breaking frontal concert paradigms came to mind a few weeks ago, while attending a multi-place / multi-space composition (an installation for others), where the emanating sounds triggered synesthesia and other gestures. Here telematics on several spaces were happening at once, and for the listener an illusion of perceiving a single piece in different spaces at once was somehow sparked. This effort was buzzing around Philadelphia's Center City, in several landmark buildings, if reality can bound what was stirring inside.
For an account of what is going on on our side, for the purpose of this log, and in spite of cabin fever, a lot feels like a static bike. Nevertheless, need to acknowledge that most of the Solimar compositions have been recovered from aging DAT tapes and quarter-inch analog tapes; hope to have them online soon. In parallel, issues regarding "open scores" have turned on a few lights. On this matter we are now gearing toward an analogy concerning "fractalization", using macro- vs. micro-structural perception and a methodological interpretation of how a performer might read these kinds of scores. "Sight reading might help, but a performer still needs to learn a score".
[ Fri Oct 22 03:25:37 PM PDT 2021 ]
Caribbean Lives, Caribbean Insight
Remembering Alvin Toffler's Future Shock these days. This book was among my first readings in English, suggested by my pal Susan Sears during early college years in the U.S. She and I used to engage in long talks sighting society's issues, while looking down the hill in southern Pennsylvania, just north of Philadelphia. Little did we know what was going to happen around us in matters such as a communications revolution, later turning into a flattened world with people knowing each other everywhere. Toffler's idea of the future (circa 1970) spoiled the notion of ``too much change in too short a period of time''. Although aimed at the technological, for many of us the changes of concern were cultural and sociological aftermaths more than anything else: a melting mixture of Anglo-Saxon ideas with those of Hispanic heritage, among others. If we tackled technological aspects, it meant the stereo hi-fi planet and unrest over the wait for the ``digital domain''.
Cultural and rather sociological because, for many of us, our endeavor seemed too homogeneous for the most part, with everything given and rather tasteless. But as bothersome as it might seem, in those days (circa the 1980s) recall confronting my identity and leaning toward a Caribbean heritage, not just because of my Spanish accent but because of origins, and sharing oxygen at sea level with others like me. Call it spirit, because of my Caribbean ties. Need to say that in those days I hung out a lot with Venezuelans, because of a shared hunch about how things should be: tale-telling, food, dancing and certainly music. How can we forget Monday nights at the Village Gate in Manhattan, listening not to opera or jazz but to Salsa. Touching on this as a preamble to a response as to why Barranquilla, a city on the Caribbean, is so important and inspirational in my life and in a lot of my work. Here reiterating a link to the peoples of these coasts while overcoming culture shocks. Geography aside, it's easy to think this sea goes all the way down to Rio de Janeiro and even beyond, meaning that all of us along these shores share something in common. Far from speculation, 'mandioca' pastry and black beans, we are talking about music and its environs.
There are ties as Hispanics, but this label is far from homogeneous. In every tale, in every situation, there is insight. Chit-chat on these coastlands doesn't exist, because in every word exchanged there is fantasy and, thereby, confronting reality seems quite pointless. Teasing about life is a way of living and dreaming on this side of the world. By the same token, thinking this way triggers imagination and creativity on many occasions. With my feet on the ground, and to be fair to the taxpayers of Barranquilla: this city on a hill just above sea level, facing the Magdalena river on one side and the Caribbean on the other, was among the first cities on the Latin American continent to see the industrial revolution. It bears its fame as the birthplace of commercial aviation, but not just that: for years a submarine telephone cable joining North and South America has landed in the city, fruit for fiction stories. Further, its layout follows a pattern also found in Pasadena, California.
In spite of its Carnival, people commit to 'hard' challenges through the year, though always fantasizing and imagining answers very often too idealistic to materialize. Normal talk is tale-telling at literary boundaries. Whatever this Caribbean spirit is, it is embedded with curiosity and imagination, a magnet forcing fictional or realistic ideas to sprout, though gravitating toward a dynamic, fun lifestyle most of the time. Will fall short by saying that it was in this city that I first realized music is something beyond what is heard. Recall listening to a tale-teller speculating on how a song should sound. But outlining here why this is inspiring: insights just like these trigger a sort of imagination constrained not only to creativity but to ingenuity as well. It was there that we saw chemistry and physics in action, and where fondness for aviation and electronics also began, among other insights.
As for Toffler's Future Shock: in a flat world with instant communication all around, a culture shock runs in parallel, each feeding on the other. A homogeneous lifestyle is no more than a pale colored picture. Facing these shocks has been a swing from blueprints in two dimensions, meaning, as my Irish pals in Pennsylvania would allege, ``your mind is really looking at three dimensions or more.'' So the Caribbean, part of one's culture shock, was no more than a window, and a further awakening to people at a variety of latitudes. Furthermore, fantasizing around common semantics, because grammars and symbols seem universal for that matter.
[ Tue 06 Jul 2021 06:16:19 AM PDT ]
Is hardware still relevant in any lifestyle?
There we were, trying to revive a Sun SparcStation. All efforts justified because, as with other Sun hardware, this was an outrageous piece of computer equipment. Not so widely used for music, although it could do an amazing amount of signal processing when it came out. Sun didn't bother designing digital-to-analog converters for audio, since the focus was more on data crunching. Besides, it had seamless system-administration software, in addition to a well-implemented file server, which kept a lot of 'sys-ad' people happy. Recall calling Sun once or twice, and they would answer anything without hesitation. These reminiscences of past moments, and of previous conceptions of what equipment and support should be, undoubtedly make us think about today's approaches in these regards. In fact, we should ask ourselves whether there is a notion of hardware today at all. All people care about today are graphical user interfaces and software layouts.
If the above seems like a complaint, maybe it is. Last year, after carefully following the manufacturer's directions, I ended up burning the motherboard of a laptop. Of course, this was not a Sun but a major manufacturer of these days. Punishment, yes! Countless hours waiting for a call center (somewhere in the world) to answer or respond to anything about my issue. Then days, weeks and months waiting for parts due to shipping delays in these pandemic days. Hence time went on, and the warranty passed its void date. Needless to say, no one acknowledged responsibility and, when the parts showed up, the charges amounted to the cost of a new machine. Then again, does hardware matter anymore? For our generation, when engineering was sublime, these situations are displeasing and quite frustrating.
In spite of these aggravating situations, finding time to work out issues around notation and illustration concerning improvisation in open scores. In ``time-scrolling'' representations for a performance, should micro-structures still guide interpretations of the composer's intentions, or should visual gestures give enough information about the sequence of events for a performance? Hoping to see some light soon, in particular for algorithmic solutions that ease the job of making these kinds of representations.
[ Sat 14 Apr 2012 10:24:12 AM PDT ]
On the Search for a Grammar for Graphic Expression in Composition
Notation for ``New Music'' works is a loose end, and more so when the composer's intentions are tied to ``Open Works'' and improvisation. Furthermore, common music notation seems to limit creativity in pieces that are not necessarily instrumental, or when the composition requires mixed media. The quintessential question arises: should every event be notated on the score? Ligeti's Désordre comes to mind: here thousands and thousands of note events are written for the pianist. Likewise, in a few new-music open scores seen recently, though we find common music notation, not everything is notated, favoring instructions and suggestions to the performer(s). As usual, been on this path before, but now seem to be finding more loose ends. Hence, we are looking for a grammar in the domain of graphic expression to serve as a complement to the composition of sounds for a machine, and further to the instrumental parts of live performers. Once again this has led to known territories where great minds have already stepped, and not so queerly those of composers of the sixties' and seventies' Manhattan scene: on one side the graph music of Morton Feldman, and on the other, Jim Tenney's composition recipes outlining a customized notion of harmony in contrast to the orthodox guidelines of diatonic coercion. Not without passing through Terry Winograd's classic hypotheses on music and linguistics.
Have found Morton Feldman's systematic procedures for visualizing sound and performance quite enlightening, among many, because of their ties to the works, specifically the paintings, of his contemporaries in the plastic arts. In Feldman's scores we see, first, a horizontal timeline with segments and, at the rightmost end (maybe several pages later), a point signaling overall duration. Second, structures with different shapes indicating notes and sounds for an instrumental performance. Third, densities resulting from the clustering of elements, in addition to intensity indications. Organization along a perpendicular axis suggests pace, and thereby pulse and beat. Here we have a composer's bird's-eye view of composition and a couple of simplified questions: from the composer's perspective, how to visualize a sound or a structure?, and from the performer's viewpoint, how might a visual structure sound?
James Tenney's concept of traditional de-harmonization acquaints a view in which a composition is conceived as the result of a search returning a set of grammar rules for a customized language unique to the harmony of the piece, be it graphic, musical or sound-wise, or perhaps a combination of the above.
This research and experimentation is exploring heuristics for implementing graphic expression, beginning with two compositions for mixed media and instrumental parts: TikiTik and ReadingT. Thanks to Joana Holanda for her piano performances and encouragement. These words, and many more to come, in memory of a great friend and colleague, my pal Francisco Iovino, who passed away in New York on Thanksgiving weekend -MMXX-.
[ Sat 12 Dec 2020 03:35:53 PM PST ]
Modeled Acoustics: yes, provided that hearing remains physical
Looks like the dusk of loudspeakers as the objects we know, and the dawn of a genre that might as well be labeled ``binaural art''. Listening habits are changing, and music production seems to be steering multi-path, with emphasis on earbuds and headphones rather than systems of woofers and tweeters. Air pressure: where? Further, among the sprouting of 'virtual' performances over the net, they seem to question the laws of physics in favor of cosmetic contours that overshadow telepresence, network performance and network art. But above all this, one question stands out: where is the audience? Bet various creative minds are going over these issues, among others, during these pandemic weeks, while coping with the metaphor of an endless loop.
But can't stop thinking of the trade-offs of stereo and quadraphonic listening in favor of head-related transfer functions used to personalize earbuds for binaural listening. Take for instance ``Lissajous'' sound-source paths such as those in John Chowning's Turenas: perception of this piece over loudspeakers differs from binaural listening, even using convolution reverb. Should encourage people to do listening tests in order to find the thresholds and boundaries of both domains (stereo vs. binaural).
Over and above this, don't think at this time that the mind can be teased into the notion of ``virtual space''. Even so, need to acknowledge a new form of acoustics that appears to be forming within the confines of modeled environments, mostly in tandem with cognitive patterns of the brain. Much like the analogy of motion in cinema in contrast to motion in a real space (e.g. dance).
For years it's been said that most people don't go to concert halls anymore but rather listen to music on earbuds. More adventurous souls go to stadiums to ``see'' the circus of a musical instance where actors lip-sync to the tracks of a chant. Whatever the situation, it must be seen through ``colored glasses'' with an artist's vision: ``new medium'' implies ``new forms''. Though the concert hall, and now loudspeakers, seem nostalgic, it's imperative to state that acoustics, the instrument, chorales and orchestras still belong to the physical world. In this essence, most probably the concert and the art of listening will remain as rituals and, the perception of new works, still a hope.
Searches on binaural listening and testing are being carried out. In particular, trials on sound-source motion using artificial reverberation, Ambisonics, as well as canonical listening cues in motion such as Doppler effects and inter-aural delays for headphone listening, are spurring several clues. Still some questioning remains: will there be binaural music only on earbuds?, and further, are we narrowed to the static paradigm of the ``Soundscape''?
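For one of those canonical cues, a minimal Scheme sketch using Woodworth's spherical-head approximation for the inter-aural time delay, ITD = (r/c)(theta + sin theta); the head radius and speed of sound are assumed values, and the function name is ours.

    (define pi 3.141592653589793)

    ;; Inter-aural time delay (seconds) for a source at azimuth theta
    ;; (radians), assuming head radius r = 0.0875 m and c = 343 m/s.
    (define (itd-seconds theta)
      (let ((r 0.0875) (c 343.0))
        (* (/ r c) (+ (abs theta) (sin (abs theta))))))

    ;; e.g. (itd-seconds (/ pi 2)) => ~0.00066 s for a source at 90 degrees,
    ;; the left/right delay to try in headphone listening tests.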
Back on the issue of so-called ``virtual performances'', it comes to mind that we've been doing telepresence and telematic performances for more than ten years now. On this subject need to acknowledge that latency as well as delay remain crucial constraints (components?) for the existence of this form. Network speed has improved and connectivity is now widespread. Thus, let us insist on the notion of an audience as an important component in this kind of ``live'' performance. Having a teleconcert across simultaneous geographical spaces, time changes, and a mixture of anthropologies opens up paths for new music and expressions. These can only be perceived by each independent audience in its own space. How each performer senses audience feedback from all over a teleconcert's radius becomes part of a telematic composition. Recordings of these venues are just memories and testimonials of a happening in the past. Let's keep thinking about the acoustics of the physical world, instruments, interfaces and the trend toward new forms of art.
BTW, Reading Terminal (2020) for ``virtual'' piano sounds is a candidate for a telematic performance. Listen to its music(-)one version: [ HERE ]
[Tue 15 Sep 2020 03:19:02 PM PDT]
On Geometry and Musical Structure
These days of confinement are slow, but something makes -time- perception seem shorter. Not to say that, with one-thing-or-another, time becomes unmanageable. However, and worth mentioning, one's mind is amazingly dynamic, neurologically and apprehensively far from static. Ingenuous or creative, lots of stuff is passing through, and one wishes one could get a snapshot of every moment. Perhaps writing down a descriptive narrative of every taste and every ingredient will suffice for moments to stay, but will see. Lots of ideas have sprouted, even composition and performance. Some seem within a shorter delta of social distancing, but others just about at ``a right space''.
Among these moments of brain turmoil, an idea involving the mechanical rotation of not-so-symmetric objects outshines the others, because of figure-cognition as a consequence of motion, inherent velocity, momentum and inertial values. On the issue of making something move (giving an impulse), several questions arise: what if different axes of rotation are tried out?, and what if the whole system displaces? Picture this as a sort of platonic solid, not that regular, made of polygons suspended in space. On top of it, several spotlights facing it from different perspectives project over two or three planes or screens. The resulting shadows create two-dimensional projections depending on the angles of light and their perspective. For good or bad, the figures depicted on each plane differ due to intrinsic irregularities of shape and lack of symmetry, thereby gestating unequal variations between the object and what is seen beneath the shade. Consequently, from -one- object in motion we get several two-dimensional images which change in time. If we record these changes, we might get points for a script on a timeline, and perhaps variations to be used in parametric or musical scores. As a counter-idea, reinforcing this notion of movement: what about static shapes that seem to be in everlasting motion, or at least seem stressed all the time?
With a little imagination, Oscar Niemeyer's Copan building in São Paulo seems to appropriate a kind of perpetual vibration, changing shape all the time. Similarly, take the case of Thomas Heatherwick's Vessel (structure) in New York, where a seldom-periodic rotation around a non-uniform axis seems to make the structure dance above ground. How are these examples of static motion depicted from different perspectives or on different planes? Surely their centers of gravity do not seem obvious, forcing one to untangle their asymmetries, points and angles with almost no hope. Regarding the patterns of their projections: what is their real structure?, are there any beginnings or ends?, or do they converge along the original shape? Further, where do cognition and imagination supersede perception of these forms? To assist in solving the puzzle, another image comes to mind: Alexander Calder's ``mobile'' sculptures. In them, one or several structures hold the piece together; therefore, motion can be regarded as a function of structure, sort of the purpose of endless screws and gears. Then again, what about the rotation of not-so-symmetric objects and its transcendence into music?
If we use terms pertaining to a geometry of music, musical structure might be described by way of a gravitational center, a rotational axis, transposition, displacement and a myriad of other variables. Using these concepts, a piece can be conceived by delineating figures based on structures and spread over several planes. Elements of these structures need not be oblique, horizontal or vertical but, hopefully, products of reflections around their rotational axes, their momentum, and patterns that can be projected onto different or overlapping planes in a composition; provided that motion is understood as the displacement of an object through time, namely, a moving structure. Many musical structures change through time and thus might be geometrically correlated.
In general, and down-to-earth: while conceiving and composing a piece, sequences of notes have their analogy in geometry in space. From this perspective, forms such as regular polygons are at the core of creating structure in a piece, for instance very symmetric figures like the triangle and the hexagon. But how come? Take three numbers (perhaps representations of the notes in a triad) and map them to the three points of a triangle. If these points are cycled, the triangle starts rotating around its center of gravity. If these numbers are cycled through time, motion is perceived. If something is moving along a line, a parallel can be traced by outlining a structure of horizontal or vertical features -like the points of the triangle-, semantically defined as a musical event through time.
A hexagon, like a hexachord, gives more combinations. Squares plus triangles give rise to figures that tease more than visual cognition. Hexagons and triangles also combine for more variety, given that hexachords, tetrachords and triads are very often used in music. By the rules of sets and groups in symmetry, these shapes can be mirrored and inverted, in addition to transposed (when the same value is added to the points) and amplified (when the points are multiplied by the same value). A set of numbers can be the notes of a triad, a tetrachord, or more, but the difference between them defines their interval value. Intervals are relationships in scales and give tuning. Therefore it can also be said that there is geometry in tuning.
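A small sketch, in Scheme, of those operations on a list of pitch numbers; the procedure names and the example triad are arbitrary illustrations.

    (define (cycle-set s)          ; rotation: first point moves to the end
      (append (cdr s) (list (car s))))
    (define (transpose-set s k)    ; add the same value k to every point
      (map (lambda (p) (+ p k)) s))
    (define (invert-set s axis)    ; mirror every point about an axis
      (map (lambda (p) (- (* 2 axis) p)) s))
    (define (amplify-set s k)      ; multiply every point by the same value k
      (map (lambda (p) (* p k)) s))

    ;; e.g. for a triad '(0 4 7): (cycle-set '(0 4 7)) => (4 7 0),
    ;; (transpose-set '(0 4 7) 3) => (3 7 10), (invert-set '(0 4 7) 0) => (0 -4 -7).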
Whether we want homophonic or heterophonic structures in a piece, the way we manipulate geometric points representing values under given semantics could be fruitful for creating loose and tense events in music. Applied to the inversions of a triad this seems too simple, but venturing into sonorities, exploring sound textures and beyond might be rewarding because of the manipulation of interesting shapes and objects. If we find perpetual motion cumbersome in the imaginary vibrations of forms like Calder's sculptures, Vessel in NYC, and the Copan building in São Paulo, let's experiment by projecting these instances of movement onto screens and planes and see what kinds of images and variations we get. Assign points to these patterns and rotate them around several axes. For sure there will be resemblance to the original shapes, but most certainly variations on the original subject will light the dawn of a new composition. Most of these ideas have sprouted while researching Bruno Maderna's thoughts on composition and Jonathan Goldman's analysis of Pierre Boulez's Rituel.
[Tue 02 Jun 2020 03:02:08 PM PDT]
An undesired performance and a technical mishap: do bugs distort creativity, or do they supply alternate or parallel ideas?
Given the concert situation of a digital work: what if a piece is performed at the wrong sampling rate (namely, the wrong tempo)? What if a cord is plugged into the wrong input? What about soldering and causing a short? ``Technology ghosts'' not present at rehearsals usually make their appearance at the actual performance. Never a law of computer music, but always a given. More generally, a broken string while performing: a mishap?, or the devil's finger to start an improvisation? Could this mean a new idea, a different version, or a germination process to sprout something without premeditation?
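On the wrong-sampling-rate question, a back-of-the-envelope sketch in Scheme: material rendered at one rate but played at another runs fast (or slow) by the ratio of the rates, shifting pitch by 12 log2 of that ratio in semitones. The function name is ours.

    ;; Semitone shift when a file made at `original` Hz plays at `playback` Hz.
    (define (rate-shift-semitones original playback)
      (* 12 (/ (log (/ playback original)) (log 2))))

    ;; e.g. (rate-shift-semitones 44100 48000) => ~1.47 semitones sharp, and fast.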
Aside from consolation, bugs happen and stick inside a conscious mind until found. Some are benign but, to a great extent, they very often bear head-scratching situations. From an expressive, or maybe an aesthetic point of view, and although 'bugs' are not an intrinsic part of conceiving a work or a piece, one needs to get used to the fact that they come around. Not so surprisingly, they might become part of an idea. Within the artist's framework (spirit), giving up because of a 'weirdness' is trashing or losing an option.
Perhaps the aesthetic values needed for developing a gesture or an idea are naive from a scientific perspective, and often funneled as illusions or byproducts of imagination instead of facts. But in many given scenes, imagination triggers ingenuity, either for manipulating a brain or ``to bring a kite down to earth.'' A steep slope and a skewed trail for creation consist of chaining challenging demands with unknowns, in addition to solutions for the inquiries posed by imagination in the creative act. Frequently a composition results from untangling knots and threading hits and failures into a narrative for a timed sequence or space.
As Stanford art professor Jenny Odell portrays in regard to productivity and living: life is not always a vector. ``Things have a myriad of meanings.'' Thus perfection doesn't need to be an objective; perfection should be nearer to the act of accomplishment, tallied in the conscience of a creative mind. As per the above, need to acknowledge that some of my compositions still have bugs. Further, some performances have not been completely successful, and my code is still buggy in several good algorithms.
These past days, instead of coding and bringing down to earth imaginings or ideas for musical gestures, have been debugging code and rethinking old pieces. Some bugs have led to new ideas, but for the most part they keep on being mind-boggling situations. Have re-mixed some pieces and am still working on understanding bugs and finding optimal conditions for Ambisonics reverb.
[Thu 19 Mar 2020 03:35:46 PM PDT]
Bowed String Model on Miles Davis' Subtle Melodic Lines
Miles Davis' subtle melodic lines provide good semantic values and give a gamut of options for framing grammars for further composition. Several years ago, lines from Freddie the Freeloader and Seven Steps to Heaven were the subject of a piece that ended up as a sound installation coupled with visuals. From its conception this work had the bowed string model and Bohlen-Pierce scales as elemental features of its construction. If we want to call upon the exploration and nuances of physical modeling, that journey ended up as a search for a new language, thereby providing new symbols and options for composition.
On the other side of the token, sound installations, once presented, frequently end up stocked in a drawer. A bowed string model and Bohlen-Pierce tuning, in addition to sound-space manipulation, were reasons enough to re-hook the cables and get a revised concert version of the original piece, this time not relying so much on visual components. Revising a computer music piece means adapting and debugging code and, in most cases, writing more. Most notably, this endeavor led us to add and perfect delay lines for modulation effects such as vibrato and Doppler. To complicate matters, automatic connection between the software applications, namely between Common Music (CMv2) and Snd, was not supported anymore. Luckily, had Scheme code that still runs on Snd, but had to write sed scripts and Emacs macros to facilitate the interconnection between these applications. Not such a big deal, however.
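A minimal sketch of such a modulated delay line in plain Scheme, with an LFO wobbling the read position to produce vibrato (Doppler works the same way, with the delay driven by source distance instead); buffer size, rates and depths are illustrative assumptions, and it presumes delay-s > depth-s.

    (define pi 3.141592653589793)

    ;; One-sample-at-a-time vibrato: a circular buffer read at a delay of
    ;; delay-s seconds, modulated +/- depth-s seconds by a sine LFO, with
    ;; linear interpolation between the two neighboring samples.
    (define (make-vibrato srate delay-s depth-s lfo-hz)
      (let* ((size (+ 2 (exact (ceiling (* srate (+ delay-s depth-s))))))
             (buf (make-vector size 0.0))
             (w 0)                       ; write index
             (phase 0.0))                ; LFO phase
        (lambda (x)                      ; feed a sample, get the delayed one
          (vector-set! buf w x)
          (let* ((d (* srate (+ delay-s (* depth-s (sin phase)))))
                 (di (exact (floor d)))
                 (frac (- d di))         ; fractional delay, for interpolation
                 (i1 (modulo (- w di) size))
                 (i2 (modulo (- w di 1) size))
                 (y (+ (* (- 1.0 frac) (vector-ref buf i1))
                       (* frac (vector-ref buf i2)))))
            (set! w (modulo (+ w 1) size))
            (set! phase (+ phase (/ (* 2 pi lfo-hz) srate)))
            y))))

    ;; e.g. (define vib (make-vibrato 44100 0.02 0.002 5.0)) and map vib
    ;; over the samples of a tone for a 5 Hz vibrato.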
A new and revised version of FtheF (AKA Freddie the Friedlander) is in its rebirth. Further explorations of the bowed string model have resulted in a useful resource. From a composing perspective, recall that ``an ensemble without strings is not an orchestra''; therefore, string sounds are always in need. Of course, resemblances to the real instrument show up, with probably unheard variations, even though there will always be missing ones. Additional research for this piece unlocked possibilities in the use of Bohlen-Pierce tuning. This scale also lends itself to symmetrical intervals and manipulations that achieve a variety of sonorities. The structure of the piece remains, as well as most of its melodic development and rhythmic features but, for those familiar with the installation, this version of FtheF certainly sounds different. Thanks to Dan Altsman for his encouragement and remarks, but especially for listening to my music.
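For readers unfamiliar with it, the equal-tempered Bohlen-Pierce scale divides the tritave (a 3:1 ratio) into 13 equal steps, so step n above a base frequency f0 is f0 * 3^(n/13); a one-line Scheme sketch (the name is ours):

    ;; Frequency of the nth equal-tempered Bohlen-Pierce step above f0.
    (define (bp-freq f0 n)
      (* f0 (expt 3 (/ n 13))))

    ;; e.g. (bp-freq 220.0 13) => 660.0 (one tritave up);
    ;; (bp-freq 220.0 1) => ~239.5 Hz (one BP step).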
[Fri 20 Dec 2019 12:48:36 PM PST]
Starry-Eyed Composer
Several years ago Bill Schottstaedt (among the pioneers of computer music) coined the term ``starry-eyed composer'' while discussing points of comparison between so-called real-time systems and deferred-time rendering systems. The main issue portrayed in the exchange dealt with the results of rendering a piece, and how a deferred system might easily go faster than its counterpart, although no sound is perceived during the computation. Then the question was: what are the expectations while the starry-eyed composer waits to listen to the rendered piece? Further, the feeling that adjustments to the piece have to be made after a first, or after various, listenings of the computation results, perhaps as rehearsals or experiments. Contrasting real-time versus deferred (rendering) time systems brought a cold fact: though real time allows performance and gestures like an actual musical instrument, its opposite deals more with conceptualizing and parameterizing music and sound events. In this sense, deferred composition follows a path related to ``the abstraction of timbre elements, their quantification and formalization of relationships among events'' (Xenakis, 1971). Thereby we can see the use of mathematics as a composition tool (as applied by Boulez and Xenakis), not necessarily ``suggesting aesthetic values or a particular mode of perception'' (ibid). Thus, the quest for mathematical functions that permit musical gestures and events.
But the issue with ``starry-eyed composers'' everywhere has not changed through time. Many still believe that the machine, the computer, either in real time or in deferred processing mode, can by itself solve all composition questions. True, we are in an era of shifting paradigms, where structure gives way to process and where truth changes to approximate descriptions, but we cannot be so starry-eyed, so over-optimistic about automatic process generation. If composition follows a path such as sculpting with clay, or with a rock, software is just hammer and chisel. Form, shape and structure are still concepts in the imagination of a creator. -This points to states of mind while producing pieces where gestation seems all too easy: that being the case, something is loose in the process, so better get the appropriate screwdrivers-. The perception of music is the result of parallel streams of events which surprise, delight, frighten or bore. Emotions triggered by these events are intimate, personal experiences. Many are consequences of expectations, but seldom does a chance discovery and arousal generate images or even imaginary worlds. Technical dexterity often overshadows a composer's intentions and subtracts chances for real expression. Back to the consequences of paradigm shifting: changes in role assumptions, products of starry-eyed attitudes, can be seen pretending to swap the acts of composition and performance. We think otherwise, because the conception of a piece still differs from its performance, even when talking about ``open works'' and improvisation, which are still products of ideas coming from various categories of composition practice. Here thoughts go to conceptualized elements that cast the features for a performance of a work.
Therefore 'the quest' for acquiring elements and tools for creation remains and, in cases, becomes overwhelming. It is the mind of a composer that teaches a neural network (or DNN); further, ``the net'' supplies ideas to the composer working on a piece. Don't let the abletons mold the iron of the creative goals to be implemented at the dawn of a new composition. The choice between following languages and styles, perhaps imitation, in contrast to genuine original ideas, is not something to relay to machines and automation. Recall that perception is a human activity. Imitations are easier to perceive because of prejudice and expectations; genuine ideas trigger discoveries, though they are harder to evaluate. Past weeks spent experimenting, and further building a knowledge base with construction blocks and structures for new pieces. Instances of these encompass geometrical shapes and patterns transposed to the time domain. [Sat 28 Sep 2019 03:44:46 PM PDT]
Xenakis, I., ``Free Stochastic Music'' (1965), in Xenakis, Formalized Music: Thought and Mathematics in Composition, Bloomington: Indiana University Press, 1971, pp. 38-42.
Has the Lyric Suite been influential?
Is Berg's Lyric Suite a benchmark in composition? You bet! George Perle's analysis of Alban Berg's masterpiece brings out dozens of clues useful for constructing ideas and, in many ways, brings back insights on composition with ``pitched sounds''. But whatever the insight, this piece is worth listening to many times, not only because of its serial manipulations and atonality, but because of its sound. Wish I had studied it at an earlier stage of my life; my path would have been different. On several issues this work brings back insights into the whys and hows of new musics and, further, enlightens ideas established by Boulez, Maderna and Nono. Also on this trail, have to acknowledge that Dallapiccola's searches on the language of sound, from serial sequences on instrumental fixtures, complement thoughts not only laid out by the above composers but previously accounted for by T. Adorno (and Webern). Musical notes are important but ``sound and texture enhance performance.''
Have done more experimenting -and on the subject- in a context based on Dallapiccola's tone rows and heuristics. Mostly, it summarizes approaches and methods by means of computer-aided composition and mathematical modeling. The Lyric Suite was a departure point for using tools at hand, and others developed ``on purpose'', for working with cycles and symmetries of PCSs. At the same time, results -also on the subject- have made their way into Do_Marin-TA and Arch Carrellage, which seem completed works for now. More trial and error in composing with first- and second-order Ambisonics, in addition to Ambisonics reverb, can also be heard implemented in this couple of pieces; though, still waiting for more testing of these features on different multi-phonic, multi-speaker layouts. Additional results show that subtle sound-source motion provides good results when delineating spatial acoustics in composition. Static sound sources placed ``in'' the sphere give a 3-D space perspective because they behave as a ``sound object''. Further on this path, have found options for blending independent characteristics sprouting among atonality, microtonality, twelve-tone tonality, and symmetries, in an effort to frame a language for gestures and identity.
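For reference, first-order Ambisonics places a source by encoding it into the four B-format channels; a minimal Scheme sketch with azimuth theta and elevation phi in radians (the procedure name is ours):

    ;; Encode sample s at azimuth theta, elevation phi into B-format.
    (define (encode-bformat s theta phi)
      (list (/ s (sqrt 2))                 ; W (omnidirectional)
            (* s (cos theta) (cos phi))    ; X (front-back)
            (* s (sin theta) (cos phi))    ; Y (left-right)
            (* s (sin phi))))              ; Z (up-down)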
[2019-06-12]
Two of my dearest friends have passed away in the past six months. Surely they deserved more time in this world. Wherever they are now, the above research and these compositions are dedicated to them. [2019-06-24]
Guest editor at "MAVAE", Journal of Music, Visual and Scenic Arts
Guest editor at "MAVAE", Journal of Music, Visual and Scenic Arts, Pontificia Universidad Javeriana, Bogota, Colombia. Vol 14 No.1 with a dossier framed on the state of Sound-Art. Wrote editor notes on historical junctions and surroundings around art-music ET music-art and gearing towards sound-art as an independent and outstanding form. Here, evoking the works of Salvatore Martirano, Iannis Xenakis and Pauline Oliveros, on their legacy, and their plowing of furrows for the "new" on music and arts. Read editorial [here]. Spectrum of articles on this dossier range from sound installations, sound sculptures, semantics, and grammars, to performance, silence, listening, among others.
[2019-01-20]
Works relying on symmetries and modeled acoustic spaces
Working on musical symmetries and modeled acoustic spaces using Ambisonics and other tools. Mostly lab work: testing and listening to structures for Arch Carrellage and do_Marin-Ta. Hereby applying symmetry techniques and signal processing to the construction of parts for these pieces. Exploring new lands, perhaps too old for some acousmatic contexts, competitions, etc., but worth the risk. Have found on this path that there's always a finding at home for new expressions. Listen to, or read about, Arch Carrellage for more on the above. On YapaYa you can listen to permutations constrained to almost-symmetric tetrachords and hexachords; YapaYa is a piece for marimba and telepresence on multi-phonic spaces. At the moment Arch Carrellage and do_Marin-TA are works in progress.
[2018-12-25]
" A lot is about combinations and combinatorics! "
A lot of composition is about combinations. Pattern recognition can be eased if a kernel, or rather a seed feature in a combination, is found or segmented. For instance, take a rhythm of four measures: there can be twenty-four combinations from arranging the measures. If the rhythmic durations are symmetric there can be palindromes and, further on, we might end up with more than forty arrangements of these rhythms. With four measures under segmentation's recognition, the remaining rhythms are found just by combining (perhaps permuting) their arrangement and repetitions. By the same means, perception teases us when we look at the patterns on Persian rugs: although we delight in their grandeur, a seed or template might not be apparent at first sight. But we are confident, by prejudice, that a seed pattern will repeat itself through rotations, flipping, and even transposition, amplification or compression. Certainly this analogy has been used in constructing rhythmic combinations; however, its numerous perspectives can be deceiving. For some listeners there are minimalist periodic repetitions of one pattern; for others, from a wider profile, complexities such as self-organizing systems arise. Been working on the above to generate source material for several drafts of instrumental-sound pieces. Still hooked on George Perle's searches around symmetries in the harmonic and time features of music.
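The measure arithmetic above, sketched in Scheme: all orderings of a list of measures (4! = 24 for four distinct measures); adding retrogrades by reversal grows the pool further. A standard permutations routine, with the names our own.

    ;; Remove one occurrence of x from lst.
    (define (remove-one x lst)
      (cond ((null? lst) '())
            ((equal? x (car lst)) (cdr lst))
            (else (cons (car lst) (remove-one x (cdr lst))))))

    ;; All orderings of the elements of lst (assumed distinct).
    (define (permutations lst)
      (if (null? lst)
          '(())
          (apply append
                 (map (lambda (x)
                        (map (lambda (p) (cons x p))
                             (permutations (remove-one x lst))))
                      lst))))

    ;; (length (permutations '(m1 m2 m3 m4))) => 24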
[2018-12-10]
Frameworks for art-music
Frameworks for art-music: on the influences behind the development of works of new music, art-music and sound art. A recount of trails and influences for the languages in use today was researched through the works of Xenakis and Salvatore Martirano. Have found that Xenakis' Pour la Paix (1981) and Martirano's L's GA (1967) prevail as works that validate the state of the art. Aside from their technological accomplishments, these works carry on a legacy established by Alban Berg's Lyric Suite and the question of what it is that they express. A composition, a work, should express more than cause-and-effect in order to prevail. Xenakis' and Martirano's works reveal different human dramas by generating a thread of prejudices that live in the listener's mind as fuel for imagination engines. Findings on these topics have led me to various influences from the University of Illinois, in particular from ``EMS'', the Experimental Music Studio at the Music Department. Have to say that it was there where I materialized my interest in computer music, and where I met Xenakis and Martirano in person.
[2018-07-28]
Colombian contributions to computer music
Published a section in The Routledge Research Companion to Electronic Music: Reaching out with Technology, Emmerson S., editor: ``Extending the range: gesture, performance, synthesis and telematics, Colombian contributions to Computer Music,'' in the chapter Research-creation in Latin America, pp. 34-37. This section portrays how research in computer science by a few Colombians was seminal for methods in algorithmic composition and real-time interaction around the world. The essay covers achievements such as PatchWork (Camilo Rueda) and Wiring (Hernando Barragan), in addition to physical modeling and tele-concert accomplishments by Juan Reyes.
[2018-06-08]
6.75[S]ears
6.75 Orejas, aka 6.75[S]ears, is a new rendition of 4.25 Orejas by Christian Ramones and Juan Reyes. Although the original version was cast as a piece for live electronics and a tri-dimensional FM-spectra ostinato, there are more than a few subtle differences. In this version sound sources follow paths along lengthier durations and, on top, short durations contrast with the originals. Furthermore, there is a new counterpointing structure comprising tones that follow symmetric arrays. Sequences of tones are plunged inside an echo chamber, like a horizontal column hosting symmetric spectra. The original score calls for live performers, but this one adds the option of in-situ performance in addition to remote performers and telepresence. The piece was premiered by Roberto Garcia and the composers on a live-electronics set-up at Matik-Matik in Bogota in March 2018.
[2018-03-25]
Reflections are part of symmetry in Music
Corner of 17th and Arch, across the street from Comcast Center Building No. 1, by the shores of the Schuylkill River in Center City, Philadelphia. A few blocks away, on Rittenhouse Square, found a book on tessellations as examples of a world of symmetries useful for complementing research on the symmetries of music. Wrote a toolbox of helper functions in Lisp for applications of tilings, also to assist in the manipulation of Schoenberg's tone rows and George Perle's ``twelve-tone tonality,'' also known as horizontal plus vertical interval symmetries. Had the notion of first applying tessellations to form and structure in composition, but found that symmetric features have been applied to the ``grundgestalt'' using intervals and tone rows for a while now. Grundlagen and leitmotifs in the composition of a piece, its connections and assembly, have made use of different sets of tessellations for construction and design thinking. As previously stated, computer models for searching for a suitable ``grundlagen'' for a piece require some knowledge of combinatorics as well as group theory and linear algebra. Some relate this topic to topology and Galois theory; deepening those insights goes beyond the scope of this blurb. A sketch of the basic row operations such a toolbox builds on follows below.
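Not the toolbox itself, just a minimal sketch of the classic operations underneath it, in Scheme, with rows as lists of pitch classes 0-11; the helper names are mine.

(define (transpose row n)
  ;; move every pitch class up n semitones, mod the octave
  (map (lambda (pc) (modulo (+ pc n) 12)) row))

(define (invert row)
  ;; mirror each pitch class around the first note of the row
  (let ((axis (car row)))
    (map (lambda (pc) (modulo (- (* 2 axis) pc) 12)) row))))

(define (retrograde row) (reverse row))

(define (retrograde-inversion row) (reverse (invert row)))

;; Berg's Lyric Suite row (F E C A G D Ab Db Eb Gb Bb B)
(define lyric-row '(5 4 0 9 7 2 8 1 3 6 10 11))

(transpose lyric-row 7)
(retrograde-inversion lyric-row)

From these four forms, tiling-like manipulations reduce to plain list operations, which is what makes Lisp comfortable for the job.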
Hoping to find solutions by implementing mappings of the above onto mini-structures and super-structures. Experimentation on perception still needs to be done for several applications of this kind of symmetry to the structural forms of music. Tessellations can also be applied to dynamic intensity panning in the time domain, as well as to other time-domain parameters of acoustic and sound manipulation. More reporting on the subject should be expected. Result of the above research: Symmetrical Reflections of WaWa, possibly an essay, a composition, or an installation of sound and light reflections. Wish I had gotten a shot of that WaWa image; not a photographer, what a shame I missed the moment! My gratitude to the University of Pennsylvania and all my friendships in Philly. [Something about WaWa]
[2018-02-05]
Millions of Bells in the Cathedral
Walked into the Cathedral Basilica of Saints Peter and Paul and got as impressed as when walking through the rooms of La Alhambra in Granada, Spain. The difference here was that perception is not so much related to pattern matching of visual sights, but rather to thousands of aural clues radiated from at least a quarter of a sphere beyond the transept of this space. Moving from one position to another along the nave, it seemed that reflections would change only marginally. Several insights came to mind: if symmetries are part of pattern recognition while looking at the floors and ceilings of La Alhambra, could a similar behavior arise from reflections and resonances in this cathedral, as well as in similar spaces?
Could we model this behavior for artificial reverberation, or for higher-order Ambisonics reverberation? How do symmetries affect resonant modes in big spaces like this one? Would symmetric reflecting paths affect localization clues for sound sources? A first approach to these questions was taking impulse responses at different positions using an Ambisonics-like microphone rig of three omnidirectional microphones in an 'x-y-z' pattern. These responses would later be used with the Zita Convolver for convolution reverb, and perhaps some Matlab modeling to adjust parameters; a minimal convolution sketch appears below, after the quote. Although still a hunch, there might be some symmetrical perception effects resulting from colliding impulse responses taken from different points of view in the cathedral. The above points are also summoned in Barry Blesser's book ``Spaces Speak, Are You Listening?'' [1] (pg. 247):
``Think of a cathedral as millions of bells (resonating oscillators), each with its own pitch (resonance frequency), and each with a slightly different decay rate (reverberation time). The clarinet sound (or a flue sound) rings (excites) only those bells with a pitch corresponding to the frequency content of the clarinet (or flue). In other words, you are actually hearing the bells of space, not the original clarinet (or flue) sounds.''
- Equally inspiring, Messiaen on the cathedral's 100+ rank organ! -
[1] Blesser, B. and L.-R. Salter, Spaces Speak, Are You Listening? Experiencing Aural Architecture, MIT Press, Cambridge, USA, 2009.
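For what the convolver actually computes, here is a minimal direct-form sketch in Scheme; the Zita Convolver itself uses far more efficient partitioned FFT processing, and the vector names below are illustrative.

(define (convolve dry ir)
  ;; direct-form convolution: out[n+k] += dry[n] * ir[k]
  (let* ((nd (vector-length dry))
         (ni (vector-length ir))
         (out (make-vector (+ nd ni -1) 0.0)))
    (do ((n 0 (+ n 1))) ((= n nd) out)
      (do ((k 0 (+ k 1))) ((= k ni))
        (vector-set! out (+ n k)
                     (+ (vector-ref out (+ n k))
                        (* (vector-ref dry n) (vector-ref ir k))))))))

Each input sample scales and adds a full copy of the impulse response into the output, which is exactly why responses measured in the cathedral would carry its ``bells'' into any dry signal.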
[2017-11-15]
At 143rd AES New York 2017
Got invitations to the AES 143rd International Convention at the Javits Center in New York City; not presenting a paper this time, though. The primary objective was to find out about commercial applications of First Order Ambisonics (FOA), and also to seek networked-Ethernet speaker-monitor implementations for multichannel applications like Higher Order Ambisonics (HOA). Got to listen to and test the Waves Ambisonics plug-in with binaural modeling on headphones; to my taste, ``sounds good but is not the real thing''. Further along the AES show, got lucky to be at an almost vis-a-vis demonstration of JBL 7-Series monitors. They demoed 7.2 and 9.2 applications for studio monitoring, in addition to a further monitor count in Dolby Atmos surround environments. This last demonstration was ``as close as it would sound'' to HOA 3-D audio diffusion. The JBL 7 Series sounds great, plus there is a passive version that is network-able to some extent. Amplifier calibration and equalization are done through a networked system called Intonato 24 and multichannel amplifiers, easing installation and portability. -Not that we are endorsing this system, but it seems like a good option.- Wish that audio over Ethernet were beyond its dawn phase and that more companies were supporting open source efforts. Thanks to Waves and JBL for taking my concerns.
[2017-10-19]
Data-sets for manipulating hexachords from tone-rows
Still on the question, or rather the inquiry, of ``past is new'' in relation to various aspects of creative processes. Had to return to scrapbook notes on composition with tone rows and pitch classes; perhaps the musical event in contrast with the sound event, or the overlapping of both. It seems that, due to some complexities arising from working with pitch-class sets (PCS), manipulation of the resulting data is bounded to set theory and combinatorics. No news there, but an opportunity for a window on ``computer-assisted composition'' (CAC). The common rationale for these methods is often portrayed as keeping up with a bunch of calculations and operations which, done by hand, end up as tedious or cumbersome processes (also a reason why many people avoid this kind of procedure). Models assist in dealing with this data and its transcendence into musical meaning.
Had the idea of working on a composition using two tone rows. Given that a tone row is a list of numbers representing pitches or note events, these lists can be stored in data structures such as arrays, vectors, or lists themselves, as in the case of Lisp. A hexachord (HC) is a subset of a tone row, a list with fewer elements, and each HC has its own identity. Symmetries are tallied by taking the resulting HCs from the prime form of the row, its retrograde, inversion, and retrograde inversion. -For this research, concentrated on prime forms and inversions to keep things manageable.- Wrote algorithms, in addition to helper functions, to access tone rows as data sets, to cast HCs, and to further find combinations and permutations of chord candidates; a sketch of the casting step follows.
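Not the actual helper library, just a minimal sketch of the hexachord casting step in Scheme, repeating the invert helper from the tessellations entry above so the fragment stands alone; the row at the end is an arbitrary example.

(define (take lst n)
  ;; first n elements of lst
  (if (or (= n 0) (null? lst))
      '()
      (cons (car lst) (take (cdr lst) (- n 1)))))

(define (invert row)
  ;; mirror each pitch class around the first note of the row
  (let ((axis (car row)))
    (map (lambda (pc) (modulo (- (* 2 axis) pc) 12)) row)))

(define (hexachords row)
  ;; split a twelve-tone row into its two hexachords
  (list (take row 6) (list-tail row 6)))

(define (hc-candidates row)
  ;; cast HCs from the prime form and its inversion
  (append (hexachords row) (hexachords (invert row))))

(hc-candidates '(0 11 7 8 3 1 2 10 6 5 4 9))

From these four hexachords, chord candidates are just combinations and permutations over short lists, which keeps the search space manageable.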
Trial and error resulting from computer modeling and prototyping of horizontal samples for laying out hexachords has helped in finding sequences that transcend into musical events and fulfill compositional ambitions, though most tests end up being done by ear. Note-event sequences are the result of ``pattern processing'' using methods like weightings, cycles, rotations, and palindromes. Common Music (CM v2) is still alive and useful for CAC and for implementing these procedures (thanks to Anders Vinjar, Torsten Anders, Tito Latini, Rick Taube, and others). Worth saying, heuristics have focused on symmetries given by hexachords, rhythmic measures, and others. Last but not least, the above is currently used as source material in Yapaya, a piece for tele-performance and marimba.
[2017-09-05]
On a side note: a recurrent insight in regards to the composition of new works is that of the 'concert' as a form of presenting time-based works. ``Concert presentations should be continuously reevaluated''.
Twenty(plus) years of Equus and Resonances
This composition served as incidental music for a stage production of Peter Shaffer's Equus in Bogota, featuring prominent actors, dancers, and a production team. It was among the first times computer music took a role in a stage play in Colombia. Synthesis of the computer music was rendered -not in real time- using Csound and Common Music on a Macintosh II computer. The signal processing techniques included the phase vocoder, FOF, convolution of spectra, as well as FM and subtractive synthesis. Rendering one minute of a single channel of audio took several hours on that kind of hardware. [read more and listen to some excerpts]
[2017-05-06]
Spectra and Transversal Sonora November 2016: Final remarks at a composers' gathering
Following are final remarks, and what was forgotten or simply couldn't be said, during a composers' round table that turned into a composers' gathering at the Spectra et Transversal Sonora new-music venues in Bogota in November 2016. For this purpose, Roger Reynolds, composer and mentor of mentors, is quoted from his book Mind Models, New Forms of Musical Experience [1], first published more than forty years ago.
``The most basic and still expanding capacity of human intelligence is the ability to retain 'images' of experience and to influence subsequent behavior by drawing upon them in the absence of conscious volition.''
Further,
``If we are to expand the size of our internal and emotional space, art -as an integrated individual response to life in a given society- is an efficient agent to this purpose.''
Questions of academics, artistic responsibility, aesthetic conscience, dexterities, and social duties, among others, surfaced at this meeting. The next statement summons a basis for the ideas portrayed: because of the synchrony between art ideas and the techniques for realizing them, composers today are more than music theory advocates. Implying that modern creation, with all means at hand, is conceived and perceived on several layers and by all the senses, not so constrained to the eyes and ears. As suggested by Reynolds' quote at the beginning, a work should trace an image on the minds of listeners.
If as artists we should assimilate our surroundings, the ability to tackle different domains can pose some complexity. But the sole issue of focusing on 'the image' invites inspiration for inter- and trans-disciplinary cooperation. It might be that we are not dreaming anymore when we find musical scores representing perspectives, differential equations, vectors, matrices, etc. -And who said being an artist was an easy way of life?- Quoting R. Reynolds again: ``It is crucial to develop an understanding of the present expansion of material and means.'' However, be clear that dexterity in a particular technique might not be enough.
"New music" might not sound sweet to strands of music tastes but this should not restrain artists from efforts to compose new works. If art as a response in a given society, and provided that there is always someone willing to enlarge her/his mental and emotional world through new aesthetic experiences, creators should extend boundaries so that a piece is a remark and listeners are gratified by their own discoveries. A remark on a piece or a work of art might as well be the voice of the unconscious or even the conscious. To make a statement through a piece, the work must be executed, presented and perceived so that the listener apprehends those extended boundaries mentioned before.
[2016-11-26]
[1] Reynolds, R., Mind Models: New Forms of Musical Experience, First Edition, Praeger, USA, 1975.
Semana del Arte @ PUJ, P. Universidad Javeriana, Santiago de Cali
One of the guest keynote speakers at Semana del Arte (ART-WEEK), organized by artist and dean Sofia Suarez of the Fine Arts Department. For the occasion, a new instance of 'Marimonda Sketches', now called Arimond, came into being. As the notion of spatial textures, as well as the 3-D perception of music, was being portrayed, Arimond posed several challenges in order to become a second-order Ambisonics piece. As in the original, sound paths follow an infinity-figure path along the plane, or perhaps the sphere, because of a metaphor based on real elephant ears (not the plant). The ears of the marimonda, a character of the Barranquilla Carnival, resemble those of the elephant, and marimondas claim ``to hear music better''.
Furthermore, because of factors that are a function of Newtonian physics, such as speed, time, and distance for motion in space, a diversity of patterns following the marimonda metaphor has been achieved. Add to this context features like room size plus reverberation, and a variety of gestures comes out while dealing with this sort of composition. An infinity-like path can be obtained using schemes such as Lissajous figures, Spirographs, or maybe patterns commonly used on the Jacquard loom (listen to J. Chowning's Turenas for more on the subject); a sketch of one such path follows below. On a technical side note, used S7 Scheme programming to debug old code and to model spatial patterns for Arimond, in particular the fine-tuning of delay-line sizes.
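A minimal sketch of the figure-eight idea in Scheme, assuming a 2:1 Lissajous pair sampled over one cycle; the constants and names are placeholders, not the Arimond values.

(define pi 3.141592653589793)

(define (infinity-point t)
  ;; x runs at twice the frequency of y, tracing a figure eight;
  ;; t goes from 0.0 to 1.0 over one cycle of the path
  (list (sin (* 4.0 pi t))
        (sin (* 2.0 pi t))))

;; eight successive panning targets along the path
(map (lambda (i) (infinity-point (/ i 8.0)))
     '(0 1 2 3 4 5 6 7))

Scaling or rotating the (x, y) pairs, or feeding them to intensity panning at a chosen speed, is where the Newtonian parameters mentioned above come into play.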
On a final note, wish spatial features in music were better understood by new electroacoustic audiences. As in the past, sometimes they come as a shock for some people, or as the sound effects of fiction movies for others. -Met wonderful students, colleagues, and friends. My gratitude to Sofia, Maria, Daniela, Lorena, Sebastian, Paula, Coco, and Santiago Rueda.-
[2016-10-06]
Dynamic patterns for motion of sound on ``Os Grilos''
Article on Sonic Ideas/Ideas Sónicas Vol 8 No. 16
Motion of sound sources in multiphonic compositions can often become artificial and nonsensical, so most composers tackling the issue approach the space parameter carefully. One direction that is less artificial, from a perception standpoint, deals with how we trace sound textures in a 3-D space. In Os Grilos, a computer-music and multiphonic piece, sound textures are cast using Scanned Synthesis techniques; from there, textures are scattered through various planes by means of natural tracing techniques such as Lissajous figures and similar patterns. As explained by Pablo Di Liscia in his introduction to Volume 8 No. 16 of Sonic Ideas, this article describes the nuances of achieving spatial manipulation and haptics for the sound of this piece.
[2016-06-02]
ExpyeZp: constructivism avoiding data redundancy
A paper and presentation at the BunB (2016) conference describing ExpyeZp, an -all-inclusive- collaborative effort aimed at the discussion of new music, science, and technology. In its beginnings, ExpyeZp was a colloquium and physical gathering, evolving through the years into an Internet mailing list and a forum reaching an audience throughout Latin America and beyond. Its modus operandi is seeded in the assumption that there are no bad ideas and, further, that no idea is better than another. Consequently, interactions and efforts strive to avoid two or more ideas on the same concept, otherwise known as data or information redundancy. This article portrays traditional hacker postures on community development, the confrontation of ideas in constructive manners for building knowledge, and the ingredients used in the development and creation of new art forms. [Presentation Slides] [article]
[2016-05-10]
Searching gestures with LPC
Neat sounds have come up while running speech signals and bowed-string sounds through LPC. Ever since I met Paul Lansky, a dozen or so ICMCs back, I've been intrigued by LPC. -And speaking of getting those FFTs right-, ``hands on, because you don't know until you try it''. Paul Lansky's computer music features emphatic rhythmic components in most of his compositions, and thus contrasts with other pieces. In our chat at ICMC he surfaced core facets of LPC, but at the time I did not get all the ideas. In my defense, LPC software was out of reach unless you had access to NeXT computers or DEC mainframes, and even then analysis and resynthesis were slow and patience-teasers, just like the phase vocoder. By the mid nineties Csound would run on a Macintosh II, making analysis tools for resynthesis accessible on desktop machines.
More recently, Josh Parmenter, while at DXARTS, wrote 'ugens' implementing LPC and other analysis tools for SuperCollider, based on code suggestions in F. Richard Moore's book Elements of Computer Music, and fast enough to bring my attention back to the subject. While the analysis is not real time, resynthesis and transformations can be done on the fly. A real-time implementation of LPC is rt_lpc, developed at Princeton by Perry Cook, Ge Wang, and others; it reminds me of Paul Lansky's keynote speech at ICMC-89, Ohio State. Although I don't want to appropriate Lansky's sound from Idle Chatter or any of his pieces, at this time it seems feasible to experiment because, on parallel layers, LPC signals are also control signals. Aside from providing carrier signals for amplitude modulation, FM or PM, LPC analysis is also a good pitch tracker. A sketch of the analysis core appears below.
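Not Parmenter's ugens nor rt_lpc; only a minimal sketch of the textbook autocorrelation method with the Levinson-Durbin recursion, in Scheme, assuming frame holds a vector of windowed samples (all names are mine).

(define (autocorrelation frame maxlag)
  ;; r[k] = sum over i of frame[i] * frame[i+k]
  (let* ((n (vector-length frame))
         (r (make-vector (+ maxlag 1) 0.0)))
    (do ((k 0 (+ k 1))) ((> k maxlag) r)
      (do ((i 0 (+ i 1))) ((>= (+ i k) n))
        (vector-set! r k (+ (vector-ref r k)
                            (* (vector-ref frame i)
                               (vector-ref frame (+ i k)))))))))

(define (levinson-durbin r order)
  ;; predictor coefficients a[1..order]; assumes r has at least
  ;; order+1 entries and nonzero energy r[0]
  (let ((a (make-vector (+ order 1) 0.0))
        (e (vector-ref r 0)))
    (do ((i 1 (+ i 1))) ((> i order) a)
      (let ((acc (vector-ref r i)))
        (do ((j 1 (+ j 1))) ((>= j i))
          (set! acc (- acc (* (vector-ref a j)
                              (vector-ref r (- i j))))))
        (let ((k (/ acc e))
              (old (vector-copy a)))
          (vector-set! a i k)
          (do ((j 1 (+ j 1))) ((>= j i))
            (vector-set! a j (- (vector-ref old j)
                                (* k (vector-ref old (- i j))))))
          (set! e (* e (- 1.0 (* k k)))))))))

The resulting all-pole filter is the ``vocal tract''; exciting it with pulses or noise resynthesizes the frame, and swapping the excitation is where cross-synthesis play begins. The residual energy e also tracks voicing, one reason LPC doubles as a control signal.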
[2016-03-30]
What about those applications of the Fourier Transform?
In a recent discussion among a new generation of Spanish-speaking composer/performers and somewhat older lone-wolf composers, the issue of timbre exhaustion came up [see here]. It has been seen -and listened to- that recent performances and compositions by young creators comprise a constrained set of sounds and timbre-manipulation techniques. One reason might be the ease and availability of real-time audio processing; the young ones complain, ``if we have 'rt' tools, why should we explore further?''. Few of us were surprised that applications of the Fourier transform are unknown to, or seldom used by, new practitioners of computer music, so terminology like convolution, phase vocoder, LPC, FOF, and spectral modeling seems Greek to many. While they know about commercial software packages, few care about the FFT. Whether this is good or bad is a question that rather lies in the domain of aesthetics. FTR, these notes are being written while waiting for the LPC analysis of a sound file.
[2016-02-20]
TikiTik performed on TeleMAMM _@_ Radical Chamber
TikiTik performed on TeleMAMM at the Radical Chamber series (cámara radical) at the Museum of Modern Art of Medellin (MAMM). Telepresence for this piece featured Elena Fuentes (violin), Simón Castaño (water flute), Miguel Vargas (harmonica) and Andrés Sampedro (IT & systems-ad bureaucracy). This tele-concert -the first of its class in a private museum at these latitudes- also featured Simón Castaño's Canción de Mantas for distributed ensembles, Mario Valencia's Mirror for remote percussionists, and Terry Riley's In C. Oscar Ceballos at Caldas University in Manizales (UdeC) and Fernando Mora at Antioquia University in Medellín (UdeA) conducted ensembles consisting of guitars and electric guitars in addition to flutes and percussion. Mario Valencia, Sebastian Castaño and Fernando Mora developed and implemented a MAX/Jitter patch for close-to-real-time visuals among the three remote locations of the concert, namely UdeA, UdeC and MAMM. Jacktrip on Linux and Mac was used for audio over UDP and to connect all concert sites. The audio system at MAMM featured a second-order Ambisonics setup.
[2015-11-24]
Restoring old compositions: "A Curious Character" (Curioso Caracter)
Some Ampex 456 tapes were baked in order to rescue "Curioso Caracter", a musique concrète piece done in the early nineties and dedicated to maverick educator Ernesto Bein. At the time, a Sony TCD-D10 PRO DAT digital recorder with a stereo microphone was borrowed to record source material at "El Moderno", a school in Bogota in the middle of the prestigious neighborhood of El Nogal. Nevertheless, most signal processing was analog, using tape-manipulation techniques on MCI recorders and AKG reverberation plate and spring units. Sound sources for this piece were those of the school between 10:00 AM and 12:30 PM back in 1990. Worth pointing out were the bell sounds coming from a nuns' cloister next to the school, which marked 30-minute intervals. In addition to boys' voices there were pigeon sounds and bird song; the school housed pigeons in a 'palomar', feeding them every day at 10:00 AM, and some of the captured material is taken from students doing these feedings. "Curioso Caracter", though musique concrète, today might be tagged as a "soundscape". [2015-09-15]
Tele Espacios Activos II
A telematic art exhibit and gathering including teleconcerts, installations, and workshops. Among the participants: Jacktrip developer Juan Pablo Caceres, trombonist and UC Irvine Music Dept. chair Michael Dessen, and telematic arts maverick Chris Chafe. Teleconcerts included works by José Gallardo, Juan Pablo Caceres, Bruno Ruviaro, Juan Reyes and Hector Fabio Torres. Organized by Juan Reyes and Mario Valencia. [2015-05]
TikiTiK
A multiphonic computer music piece using eight channels and Ambisonics for live telematic performances.
Os Grilos at Triple CCRMAlite 40, 50, 80
'Os Grilos' for Scanned Synthesis and second-order Ambisonics was programmed at one of the concerts of the Triple CCRMAlite venue. John Chowning encouraged Juan Reyes to keep working on this type of synthesis. See more of the story in the program notes. Thanks a lot to Eoin Callery and Nette Worthey for their support and encouragement.
[2014-10]
Os Grilos
A multichannel, second-order Ambisonics computer music piece using Bill Verplank's and Max Mathews' Scanned Synthesis.
Delay lines, Leslie and Moving Sources
Research on the implementation of delay lines and moving sound sources, with application to the Leslie speaker: ChucK, Scheme, and Lisp code implementing Smith, Serafin, et al., ``Doppler simulation and the Leslie''. Web page [here]. A minimal sketch of the core interpolated delay-line read follows.
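Not the published code itself; just a sketch in Scheme, assuming a circular buffer and linear interpolation, of the fractional delay-line read at the heart of Doppler simulation (the function name and arguments are illustrative).

(define (delay-read buf write-idx delay-samps)
  ;; linear-interpolating read at a fractional, possibly time-varying,
  ;; distance behind the write index; buf is a circular vector
  (let* ((n (vector-length buf))
         (pos (- write-idx delay-samps))
         (i0 (modulo (exact (floor pos)) n))
         (i1 (modulo (+ i0 1) n))
         (frac (- pos (floor pos))))
    (+ (* (- 1.0 frac) (vector-ref buf i0))
       (* frac (vector-ref buf i1)))))

Sweeping delay-samps with the horn's angular position moves the read point relative to the write point, producing the pitch and amplitude modulation characteristic of the Leslie.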
Tele Espacios Activos I
A small contribution to the 2014 International Festival of the Image in Manizales, Colombia. The first rendition of Tele Espacios Activos featured a teleconcert with performances between Manizales and Stanford, and between Manizales, Cali and Medellín, Colombia. Natalia Castellanos (at U. de Caldas) and Juan Reyes (at Stanford) gave a tele-performance of Reyes' Open Spaces at this venue.
CCRMA's World Teleconcert
Joined performers and composers from around the world. While the main stage was at Stanford's Bing Concert Hall, there were other stages around the planet. Tele-espacios Abiertos was performed by Zhengshan Shi at Bing, as well as by Lilian Campesato and Julian Jaramillo in Sao Paulo and Daniel Osorio, Mario Valencia and Juan Reyes in Manizales. View the concert performance.
Tele-espacios abiertos (Open Spaces)
A composition for telematic performance as well as a video and sound installation. Sound sources and material come from physical models of banded waveguides, namely a model of the Tibetan singing bowl programmed by the composer. More about banded waveguides on Georg Essl's web page.
Arcured-2013
Invited to Arcured in Barranquilla, Colombia. Arcured is a network of culture and arts formed by members of academic institutions affiliated with Red Clara throughout Latin America. This conference was aimed at professors and artists focused on new network technologies applied to the arts. Gave a talk on Group Interaction and Telepresence at Simon Bolivar University; also a computer music concert at MAMB, Museo de Arte Moderno de Barranquilla, and a Colombian tele-concert featuring Cuatro25 Orejas for piano, electronics and visuals, with performers at Icesi University in Cali and Caldas University in Manizales, and Juan Reyes at the piano at Universidad Autonoma in Barranquilla.
Congreso Internacional de Artes del Caribe
Invited talk at the Caribbean Art Congress in Cartagena, Colombia. Organized by the School of Arts of Bolivar and the University of Antioquia, this congress was a gathering to reflect on art and technology. The subject of this talk focused on spontaneity in music performance and the arts.
A celebration with John Chowning in Bogota
A celebration of John Chowning's return to Bogota, this time at Javeriana University. This venue was a four(+) day gathering and included talks and a composers' colloquium, in addition to a concert by Maureen and John Chowning and a telematic performance of John Cage's Four6 between Bogota, Sao Paulo and Stanford. This venue was organized by Ricardo Escallon and Juan Reyes.
FLAMIM-2012
FLAMIM, the Latin American forum for new musical interfaces, inside the framework of Diseño(+) at Icesi University in Cali, Colombia. Joined Wendy Ju and fellow colleagues Michael Gurevich and Jaime Oliver for panels, workshops and discussions on the subject of music interaction. Part of this forum included a teleconcert (the first of its kind using advanced networks in Colombia), with performers at the University of Michigan, Stanford University and Icesi University. Performers included Stephen Rush and his Digital Ensemble at Ann Arbor; Chris Chafe, Roberto Morales, Rob Hamilton, et al., at Stanford; as well as Jaime Oliver, Michael Gurevich, Daniel Gomez, Juan Reyes and others at Icesi. Other performances at FLAMIM featured a new realization of John Cage's Rozart Mix, in honor of the composer's centennial, directed by Michael Gurevich. The program also featured Juan Reyes' Oranged (lima-limón) for multi-channel tape. This venue was organized by Maria Clara Betancourt, Daniel Gomez and Juan Reyes.
Sudamérica Electrónica Vol. 7
An electronic arts exhibit with Colombian artists at the Caraffa Museum in Cordoba, Argentina. The program included a live performance of Horace in San Mateo and a piano improvisation on Cuatro25 Orejas for piano and live electronics at La Cúpula Gallery. Venue curated by Jorge Castro and supported by the Alzate Avendaño Foundation of Colombia.
Horace in San Mateo
A real-time algorithmic computer music and live electronics piece for ``Fender Rhodes'' (if possible) and pianoforte sounds. The piece follows the spirit of the augmented-ninth dominant, or plus-eleventh, chords seldom used by pianist Horace Silver in several of his compositions.
Chuchoter
This is a sound installation as well as an eight-channel tape music composition. In this piece, sounds travel a path inside a boxed environment obtained from Lissajous patterns applied to the intensity panning of each source.
Sonare
A sound art exhibit featuring Chuchoter as well as other works from various artists at the Museum of Modern Art of Bogota. This exhibit also featured talks on the subjects of computer music and new ways of expression and performance.
Dropouts
A sound art exhibit featuring FtheF for visuals, modeled bowed strings, and artificial performer. This is a multichannel composition and sound-space intervention; sound sources come from the physical model of the bowed string.
FtheF
This is a new rendition of the original ``Freddie the Friedlander'' for physical model of the bowed string plus artificial performer. This version adds visuals and color in an attempt at image-by-synesthesia fabrication.
ExpySaxn
This is an algorithmic miniature for saxophone sound samples. In its code, the algorithm pursues J. C. Risset's rhythmic paradoxes, which in this case give the illusion of an everlasting accelerando or decrescendo. This piece is dedicated to music education advocate and winds performer Terry Mohn. A sketch of the layering behind the paradox follows.
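A minimal sketch of one way to lay out a Risset-style endless accelerando, assuming pulse layers an octave apart in tempo whose gains follow a bell curve over log-tempo; every constant is a placeholder, not the piece's code.

(define (layer-tempo base-bpm k t doublings-per-sec)
  ;; tempo of pulse layer k at time t; every layer climbs continuously
  ;; and adjacent layers sit an octave (a doubling) apart
  (* base-bpm (expt 2.0 (+ k (* doublings-per-sec t)))))

(define (layer-gain tempo center-bpm width)
  ;; bell-shaped weight over log2-tempo: a layer is loudest near
  ;; center-bpm and fades toward the extremes, hiding entries and exits
  (let ((x (/ (log (/ tempo center-bpm)) (log 2.0))))
    (exp (/ (* x x) -2.0 (* width width)))))

;; gains of four layers (k = 0..3) at time t = 0
(map (lambda (k)
       (layer-gain (layer-tempo 60.0 k 0.0 0.1) 240.0 1.0))
     '(0 1 2 3))

As time advances every layer speeds up and its gain slides along the bell; when the fastest layer has faded out, a new slow layer can fade in an octave below, so the ensemble seems to accelerate forever.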
Art and Gesture Seminar
This is a course offered to students interested in symbols, semantics, and grammars of gesture in various domains. Initially conceived for music performance, students have also found applications in the visual and space arts as well as dance and body art.
Not too recent but current:
Pages for workshop on Elements for Electroacoustic Composition
A tutorial for electroacoustic composition and sound art production, in Spanish, used in several workshops at universities and institutions throughout Latin America.
AVRLIB on Wiring and Arduino
Just in case the Wiring or Arduino frameworks are not enough: these pages are an introduction to using Pascal Stang's AVRLIB and AVR-GCC on Wiring and Arduino boards.