The Processed Typewriter
Other than the human voice, musical instruments convey sound that is primarily abstract. We interpret these sounds as music to varying degrees, but stripped of their cultural associations, the noises would remain highly ambiguous. With a typewriter, the sounds inherent in the machine's use also carry linguistic meaning. Having this added layer to work with, a composer can pair text and sound in a multitude of ways, even playing the ambiguity of semantic meaning against the ill-defined meaning of typewriter sounds. For this project I am specifically thinking toward a performance in the late spring during a residency with famed soprano Tony Arnold. Rather than a typical accompaniment for a solo soprano piece, such as a piano, it would be far more interesting and musically fertile to have her sing lyrics that are actively being typed in the background. The text is transformed into sound not only through the vocal line but also through the hammering of the typewriter. Furthermore, these sounds and the images of the text appearing on the page would be processed, enabling a wide range of articulations, imagery, references, and audio sculpting.
Contents
Preliminary Designs/Drawings
Before photos of the 1950s manual typewriter:
For Performance in a Concert Hall
For Performance in an Art Gallery
Intermediary Photos
Examples of Similar Work
Automating a mechanical typewriter
Julian Koschwitz's Typewriter Installation
Three Lists of Project Goals
a) Things that need to be completed for a minimal viable product
The most basic goal of mine with this project is the sound processing of the typewriter. As I mentioned above, it needs to be incredibly nuanced and yet allow for a great deal of timbral variation. At this point, my concept for the compositional form is 32 microludes (short but intense musical gestures lasting 12-30 seconds each). As the choice of form might suggest, I seek to make each microlude musically distinct, succeeding both as an individual fragment and in relation to the larger set. To achieve this I plan to create 32 sound presets, with controls and settings adjusted beforehand to facilitate use. During the performance, rather than worrying about actively (and quickly) modifying the various parameters to fit each microlude, I can simply click through the presets. This will allow me to replicate the sounds exactly from performance to performance and also cut down on potential user errors. Parameters I am considering right now include filters (low-pass, high-pass, band-pass), reverb, delay, vibrato, resonance, pitch modulation, multiphonics, and so on. On the technical side, the patch would need to be programmed in MAX/MSP and the hardware would need to be set up and wired accordingly.
b) Things that I want to have done by the final deadline
In addition to all of the sound processing described in (a), I hope to create the same level of nuance and variation in the visual domain. Also in MAX, I would program 32 processing presets for the webcam stream that takes in video of the page being typed in real time. It would be great to apply some of the same parameters used to craft the sound when crafting the video effects. Representing auditory transformations in the visual world will not be a copy in method, but rather a suggested relationship between two modes of thinking. Coordinating and calibrating the numerous presets running concurrently will require a lot of troubleshooting, but will yield an artistic gesture with greater conviction.
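One way to suggest that relationship between the two domains is to derive the video settings directly from the audio preset. The mapping below is a minimal sketch under assumed parameter names (e.g. heavier reverb implies more blur); the actual correspondences would be decided compositionally:

```python
# Hypothetical mapping from an audio preset to video effect settings,
# so one transformation gestures in both domains at once.
def audio_to_video(preset):
    """Derive illustrative video settings from an audio preset dict."""
    return {
        # more reverb -> softer, more blurred image
        "blur": round(preset.get("reverb_mix", 0.0) * 10),
        # higher filter cutoff -> higher contrast (capped at 1.0)
        "contrast": min(1.0, preset.get("cutoff_hz", 1000) / 5000),
        # delay time -> number of trailing "ghost" frames
        "echo_frames": preset.get("delay_ms", 0) // 40,
    }
```

The point of the sketch is the design choice, not the numbers: a single preset table drives both the audio chain and the video chain, so the two never drift out of sync between performances.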
c) Things that would be nice to have if I had unlimited time
As seen in some of the examples of similar works, it would be fantastic if the manual typewriter were automated. After I "performed" the ideal piece on the typewriter myself, automation would let me keep the dramatic/theatrical element of the physical device at any concert while ensuring absolute accuracy in execution. This would also make things much easier for the soprano, who would know exactly how the typewriter part will go, much like a tape part, but with the real process happening on stage. It is similar to a player piano, where rolls representing a given interpretation are played back through the instrument on which it was first performed. This would require a tremendous amount of technical study, parts, and time. Perhaps I could use MIDI values to control the different physical parameters of the automation (attack time, pitch being replaced by a specific key, etc.).
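The MIDI idea can be sketched as a simple lookup: each MIDI note number addresses one key actuator, and velocity becomes strike force. The key layout, base note, and force scaling below are all assumptions for illustration; the real mapping would depend on the hardware rig:

```python
# Sketch: translate MIDI note-on events into typewriter key strikes.
# KEY_LAYOUT order and BASE_NOTE are hypothetical choices.
KEY_LAYOUT = "qwertyuiopasdfghjklzxcvbnm"
BASE_NOTE = 36  # assumed MIDI note assigned to the first key

def note_to_key(note, velocity):
    """Return (key character, strike force 0.0-1.0) for a MIDI event."""
    index = note - BASE_NOTE
    if 0 <= index < len(KEY_LAYOUT):
        strike_force = velocity / 127  # MIDI velocity range is 0-127
        return KEY_LAYOUT[index], strike_force
    return None, 0.0  # note outside the mapped range: strike nothing
```

Recording the "ideal" performance as a MIDI file would then serve the same role as a player-piano roll: a fixed interpretation replayed through the physical machine.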
Also, if I were given unlimited time, I would work on composing the piece and fine-tuning the patch within the practicalities of the music. For this piece I am thinking of dichromacy as a basis for representing the audio-visual transformations between the microludes. Dichromacy (di meaning "two" and chroma meaning "color") is the state of having two types of functioning color receptors, called cone cells, in the eyes. Organisms with dichromacy are called dichromats. Dichromats can match any color they see with a mixture of no more than two pure spectral lights. For this work I would be interested in red-blue dichromacy. Here is an example of how the two color spectra would connect the microludes together (each entry gives the color's red and blue channel intensities as percentages):
1. Cool Black (#000000) 0% 0% Solo typewriter
2. Navy (#000080) 0% 50%
3. Dark Blue (#00008B) 0% 55%
4. Duke Blue (#00009C) 0% 61%
5. Medium Blue (#0000CD) 0% 80%
6. Blue (#0000FF) 0% 100%
7. Electric Ultramarine (#3F00FF) 25% 100%
8. Indigo (#4B0082) 29% 51%
9. Patriarch (#800080) 50% 50%
10. Vivid Orchid (#CC00FF) 80% 100%
11. White (#FFFFFF) 100% 100% Solo voice
12. Vivid Raspberry (#FF006C) 100% 42%
13. Rubine Red (#D10056) 82% 34%
14. Folly (#FF004F) 100% 31%
15. Rich Carmine (#D70040) 84% 25%
16. Munsell (#F2003C) 95% 24%
17. Crimson Glory (#BE0032) 75% 20%
18. Ruddy (#FF0028) 100% 16%
19. Spanish Red (#E60026) 90% 15%
20. Burgundy (#800020) 50% 13%
21. Cadmium Red (#E30022) 89% 13%
22. Carmine (#960018) 59% 9%
23. Red Devil (#860111) 53% 7%
24. Sangria (#92000A) 57% 4%
25. Rosewood (#65000B) 40% 4%
26. Red (#FF0000) 100% 0%
27. Rosso Corsa (#D40000) 83% 0%
28. Dark Candy Apple (#A40000) 64% 0%
29. Crimson (#990000) 60% 0%
30. Dark Red (#8B0000) 55% 0%
31. Maroon (#800000) 50% 0%
32. Cool Black (#000000) 0% 0% Solo typewriter
Materials Needed
A speaker for hearing the processed sounds, preferably a stereo PA system
A webcam with an internal microphone
A manual typewriter
A piezo
An Arduino and necessary electronic parts for hooking up the piezo and laptop
A laptop computer with MAX/MSP
A means to fasten the piezo inside of the typewriter
A stand or some other fixed structure to secure the webcam in place
Audio cables between the laptop and speaker
Digital projector to see the processed video feed
White, blank letter size (8.5"x11") paper
Replacement typewriter ribbons (if needed)
List of Steps to Achieve the Minimal Viable Product
I predict the most time-intensive part of making this idea happen will be creating a patch (or several) that allows for highly refined real-time processing of the video and sound. It would be interesting if these were both mapped to the same parameters, so that any given transformation in the sound of the typewriter would reliably also transform the text feed in an expressive way. Looking into MAX patches others have made for sound and video processing would be a good first step. From these, developing a list of timbral and visual extremes to serve as programming goals for the project would come next. Then, perhaps, brainstorming about how to best graphically display these parameters for the user or player. Fitting the hardware to the typewriter would probably be most useful before coding the patch, as it is good to be able to test the transformations on the actual instrument while writing in MAX. Once the patch and physical instrument are both robust, the final step would be writing a short sample to adequately display the full depth of possibilities.
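An early hardware test along these lines would be detecting individual key strikes from the piezo signal, since each strike is what the processing reacts to. A minimal sketch of that detection, with an assumed amplitude threshold and debounce window:

```python
# Sketch of key-strike (onset) detection on normalized piezo samples.
# THRESHOLD and REFRACTORY are assumed values to be tuned on the
# actual typewriter, not measured ones.
THRESHOLD = 0.3   # amplitude above which we call it a key strike
REFRACTORY = 50   # samples to ignore after a strike (debounce the ringing)

def detect_strikes(samples):
    """Return the sample indices where a key strike begins."""
    strikes, cooldown = [], 0
    for i, s in enumerate(samples):
        if cooldown:
            cooldown -= 1          # still inside the debounce window
        elif abs(s) > THRESHOLD:
            strikes.append(i)      # new strike detected
            cooldown = REFRACTORY  # ignore the strike's own resonance
    return strikes
```

In the real setup this logic would sit between the Arduino's analog read and the MAX/MSP patch, with each detected strike triggering the current preset's processing.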