Final Exam

Question 1:
*1. Discuss the (hypothetical) encoding of melodic material in a repertory with which you are familiar (for example, music for piano, guitar, orchestra, chamber music, band) for (a) notation, (b) sound, and (c) analytical applications. What problems could be encountered in this repertory for each of the three domains? To what extent are the problems solvable?

Most of what I do musically could be categorized as "electronic music", even though sometimes it doesn't actually fit that genre. In the case of actual "electronic" music, i.e., a Third Option techno/electronica/dance record, encoding is quite a different ball game than, say, for a Mozart opera. In one sense, though, it isn't all that different. There is a distinction to be made between the preservation of an actual performance and the preservation of data that makes ANOTHER performance possible. Common music notation is the latter. And of course, in Mozart's time, the former wasn't possible, except perhaps in the memories of the audience. From a notation standpoint, the two types of music could be dealt with the same way. In an opera, one might have a line of shapes representing pitch and duration, with notes scrawled next to them indicating what instrument should be used. In the melody line itself, there might be further markings telling players to do things that vary timbre or style, such as staccato or pedal markings. The same could be done with "techno" music, simply by adding new symbols in cases where something couldn't be represented with the symbols we currently have. Even notes on when to push certain start and stop buttons, or where to place loudspeakers, could be given. In fact this is done, even in "rock" music venues: for example, when a band comes to a club's sound engineer with a schematic of mic and amplifier placements. Essentially, this is notation.

It might be fitting to note (no pun intended!), however, that most of what I just described is about timbre, not melody. I see melody as a very, very small part of the overall picture of a given piece of music, although it is the thing a listener can most easily take away from a performance and recreate later in myriad different forms. Melody is also the most flexible aspect. A melody stays a melody even when it is in a different octave, in a different key, or on a different instrument. All that must be preserved in order to recreate a melody are the durations of the notes and the relationships between their pitches. So notating this is relatively easy.

That also means that analyzing melody is the easiest part of analyzing music. A series of notes, played one after the other, is easy to encode in so many ways that it is then easy to analyze and manipulate.

eC eC sD sEf eF qG

See how easy it is to just make up a way to encode a series of note durations and pitches? (Two eighth-note Cs, a sixteenth-note D, a sixteenth-note E flat, etc. I'm always in c minor!)
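As a quick sketch of just how easy this is to work with, here is one way such an encoding could be parsed into duration/pitch pairs. The token format is my own assumption from the example above: one duration letter (q = quarter, e = eighth, s = sixteenth), then the pitch name, with a trailing "f" for flat.

```python
# A minimal, illustrative parser for the made-up encoding above.
# Assumed token format: duration letter + pitch name (e.g. "sEf" =
# sixteenth-note E flat). Durations are in quarter-note units.

DURATIONS = {"q": 1.0, "e": 0.5, "s": 0.25}

def parse_melody(encoded):
    """Turn 'eC eC sD sEf ...' into a list of (pitch, duration) tuples."""
    notes = []
    for token in encoded.split():
        duration = DURATIONS[token[0]]  # first character is the duration
        pitch = token[1:]               # the rest is the pitch, e.g. 'Ef'
        notes.append((pitch, duration))
    return notes

melody = parse_melody("eC eC sD sEf eF qG")
# → [('C', 0.5), ('C', 0.5), ('D', 0.25), ('Ef', 0.25), ('F', 0.5), ('G', 1.0)]
```

Once the melody is in this form, transposing it, reversing it, or comparing it against another melody is a matter of a few lines, which is exactly the point: melody alone is trivially analyzable.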

The complication comes when timbre is introduced. Adding octave numbers, for example, complicates the matter (and octave is indeed a matter of timbre, for a melody sung by a baritone is still the same melody when sung by an alto, even though the result is quite different). Then there is harmony, and the instruments, and what the instruments are doing (again, do they have the pedal down, the reverb unit on, or the distortion pedal turned up?). As all of this information gets added in, analysis gets harder and harder.

Still, anything that can be written down so that a human can read it on a piece of paper, without any judgment calls about markings that could mean multiple things, a computer can understand, and a human can think about in an analytical fashion, at least theoretically. So when it comes to the preservation of data that makes a new performance possible, I'd say almost any problem is solvable.

However, there are things beyond our control, and no matter how rigid a set of notes gets, a new performance is still a new performance, and will be different in subtle and not-so-subtle ways from every performance before or after it. Even the playback of a recorded song will sound different every time. We may not realize it, because we've trained ourselves not to notice the difference when we use a different set of headphones or speakers, or when our ears are hearing differently because it's night instead of morning (they do indeed), or when the acoustics of the room we're in are different from the last room's.

Of course now I've gotten into sound, which moves me into the notion of "preservation of an actual performance", which is what sound (or video) recording is about. In theory, a sound recording doesn't give information about HOW to perform a given piece. (It does, actually, because a person could listen and "reverse engineer" some if not all aspects, depending on how good they are at such things.) A sound recording is a snapshot of something done, and it does preserve a lot of those one-time irregularities that cannot be captured by notation. Of course, as I said, even a new playback of a recording introduces nuances that weren't there the last time it was played back, but still, there ARE aspects of the performance that are captured, rather than described.

This is where new genres of music like electronica deviate from someone like Mozart or Bach. Now, because things can be preserved this way, it becomes less important for a given composer to write out things like music notation. I often wonder: if Bach had the tools we have, would he have bothered writing things out? Or was that just the only way he had to preserve what was in his head? I don't know about Bach, but I certainly DO write things on staff paper, for example. But it means something else. It's a way to extract certain information and purposefully REMOVE other information, so that I can cause a new part or piece to develop. For example, if I've got a nice techno mix with a pretty melody, I might transcribe the melody on staff paper for a violinist to play on a different recording or at a performance. My INTENTION is for something other than my original idea to happen.

All of this points to a basic dichotomy that I see when I think of music, one which is at times beautiful and at times frustrating to the nines! It is partly that distinction I made between preserving performances and preserving data that CREATES performances. As musicians, we are always either encoding or performing, and the two are very different. Still, they go together, and they have pieces of each other in them, as when a sound recording becomes a different performance because of the room it's in, or when part of a performance is to push a button that plays back a sound recording (i.e., a sample; it's interesting that playing a Mozart piano concerto on a Yamaha keyboard is actually pushing play a bunch of times on small sound recordings!).

All this having been said, another thing comes to mind. I'm not sure why, if we were truly wise, we would seek to encode at all. It seems to me that encoding is simply our attempt to control something, to desperately try to bring back something we enjoyed once, as if we were afraid we would never enjoy anything again. Or in the case of a composer, writing things down compulsively, trying to control what happens in the symphony hall, afraid that his creative juice will one day run out, and the only way to survive is to preserve some record of his genius so that people will remember. Mortals grasping at straws, trying to become immortal. Faithless creatures not realizing that music is in the moment.



Question 2:
2. Select one domain of musical information (sound, graphics, analytical or “logical” data). Which attributes of musical information are usually required to manipulate data in this domain? How do the requirements of this domain differ from those of other domains? (To make this tangible, you could relate it to the kind of music you discussed in No. 1).

I'll try to talk more specifically here about my process, with regard to sound, as that is always my final goal. Again, there are two distinct things I end up working with: 1) data that preserves an actual sound recording (i.e., audio files) and 2) data that makes creating the sound I want possible (i.e., MIDI files or music notation).

In the case of an audio file, data is manipulated as sound, and we're not talking about notes, or durations, or really anything "musical". A sound file basically consists of only one thing: a list of amplitudes over time. Of course, I never really have to think that deeply. At this level, I may be thinking about EQ, removing or boosting certain frequencies to create the timbres I want, or about reverberation, or simple variations in loudness. I may be thinking about placement across speakers, or cutting and pasting a sound file around. These attributes become musical, but they're not "classically musical". No attribute from, say, a score is preserved here. Audio files don't contain duration information or pitch classes or anything other than a list of amplitudes, and everything I do is about changing one, some, or all of those amplitudes to create a different sound. In essence, this is what all music is. Notice, however, and I find this incredibly important to remind people of, that we DO NOT NEED TO UNDERSTAND which amplitudes in the list are being changed in order to create what we wish. (Did the first violin makers understand the mathematics of acoustics? No.) In fact, working at the individual sample level like this usually yields nothing.
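To make the "list of amplitudes" idea concrete, here is a tiny sketch of the simplest possible manipulation at this level: a volume change, which is nothing more than scaling every sample. The sample values are hypothetical, not from any particular recording; real audio files hold tens of thousands of such values per second.

```python
# Illustrative only: an audio file, stripped of headers and encoding,
# is just a sequence of amplitude values over time. A gain (volume)
# change multiplies every one of them by a constant factor.

def apply_gain(samples, gain):
    """Scale every amplitude in the list by a constant factor."""
    return [s * gain for s in samples]

samples = [0.0, 0.5, 0.9, 0.4, -0.3, -0.8]  # hypothetical mono samples
quieter = apply_gain(samples, 0.5)          # halve the amplitude
# → [0.0, 0.25, 0.45, 0.2, -0.15, -0.4]
```

Note that nothing here knows about notes, keys, or instruments, which is exactly the point: every operation at this level, from EQ to reverb, ultimately reduces to changing some or all of the amplitudes.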

The other case is data that creates sound. I might, for example, work with a MIDI file. A MIDI file is more closely related to common music notation, but is still different. MIDI files don't need any data about graphics, such as staff lines. They also don't need to preserve enharmonic spellings, like the difference between E flat and D sharp. All they need is an absolute pitch and a duration. However, they do have other information that doesn't usually make it into a score, such as tempo, which is a big one. I need tempo information in order to link my MIDI data with my already-created sound data. Then I need attributes such as what type of instrument to play, and where to route the MIDI stream so that the right software or hardware synth will be used to play this sound data.

Generally, I need anything that has directly to do with sound. I don't need graphical data, because generally I won't be printing scores. So I don't need note shapes, enharmonic spellings, titles, staff lines, margins, note spacing, slur shapes, staccato markings (something like that is communicated in a MIDI file simply by a short duration), or anything visual in nature that doesn't have something to do with sound. I don't care, for example, how eighth notes are beamed for readability. I also don't need measure numbers, or analytical data such as supertonic, tonic, dominant, that kind of thing. I don't need to know any of that. I might need music notation so that I can get a player to play something, but I don't need it to be particularly beautiful, as long as the player gets the idea. In fact, I could scrawl out a series of notes on a piece of notebook paper if the player is capable of getting it done.

In fact, in the end, what I need is a series of amplitudes :)