The Processed Typewriter


Other than the human voice, musical instruments convey meaning through sound alone, and that meaning is largely abstract. We interpret these sounds as music to varying degrees, but stripped of their cultural associations they remain highly ambiguous. With a typewriter, the sounds inherent in the machine's use also carry linguistic meaning. With this added layer to work with, a composer can pair text and sound in a multitude of ways, even playing the ambiguity of semantic meaning against the ill-defined meaning of the typewriter's sounds. For this project I am thinking specifically toward a performance in the late spring during a residency with famed soprano Tony Arnold. Rather than a typical accompaniment for a solo soprano piece, such as a piano, it would be much more interesting and musically fertile to have her sing lyrics that are actively being typed in the background. The text is transformed into sound not only through the vocal line, but also through the hammering of the typewriter. Furthermore, these sounds and the image of the text appearing on the page would be processed, enabling a wide range of articulations, imagery, references, and audio sculpting.


Preliminary Designs/Drawings

Before photos of the 1950s manual typewriter:

Media:001.jpg

Media:002.jpg

Media:003.jpg

Media:004.jpg


For Performance in a Concert Hall: Media:Diagram2sm.jpg

For Performance in an Art Gallery: Media:Diagram4sm.jpg


Intermediary Photos

Updates from November 14, 2014

Media:ProjectPlan.jpg

Media:01webcam_mic.jpg

Media:02webcam_mic.jpg

Media:03replacement_ribbon.jpg

Media:04replacement_ribbon.jpg

Media:05test.jpg

Media:06test.jpg


Examples of Similar Work

The Interactive Typewriter

D.O.R.T.H.E.

Automating a mechanical typewriter

USB Typewriter

Julian Koschwitz's Typewriter Installation

Instrument for Unsent Letters


Three Lists of Project Goals

a) Things that need to be completed for a minimal viable product

The most basic goal of this project is the sound processing of the typewriter. As I mentioned above, it needs to be incredibly nuanced and yet allow for a great deal of timbral variation. At this point, my concept for the compositional form is 32 microludes (short but intense musical gestures lasting 12-30 seconds each). As the choice of form might suggest, I want each microlude to be musically distinct, yet succeed both as an individual fragment and in relation to the larger set. To achieve this I plan to create 32 sound presets, with controls and settings adjusted beforehand to facilitate use. During the performance, rather than worrying about actively (and quickly) modifying the various parameters to fit each microlude, I can simply click through the presets. This will allow me to replicate the sounds exactly from performance to performance and also cut down on potential user errors. Parameters I am considering right now include filters (low-pass, high-pass, band-pass), reverb, delay, vibrato, resonance, pitch modulation, multiphonics, and so on. On the technical side, the patch would need to be programmed in MAX/MSP and the hardware would need to be set up and wired accordingly.
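
The patch itself will live in MAX/MSP, but as a rough illustration of the preset idea, here is a short Python sketch of how the 32 presets might be stored and stepped through with a single click; the parameter names and values are placeholders, not settings from the actual patch.

 # Illustrative sketch (not the actual MAX/MSP patch): storing 32 parameter
 # presets and stepping through them with one "next" action, so nothing has
 # to be dialed in by hand during a performance.
 
 PRESETS = [
     # Placeholder values; the real presets would be tuned by ear per microlude.
     {"name": "microlude 01", "lowpass_hz": 800,  "reverb_mix": 0.2, "delay_ms": 0,   "pitch_shift": 0.0},
     {"name": "microlude 02", "lowpass_hz": 4000, "reverb_mix": 0.6, "delay_ms": 120, "pitch_shift": -5.0},
     # ... one entry per microlude, 32 in total ...
 ]
 
 class PresetBank:
     def __init__(self, presets):
         self.presets = presets
         self.index = -1          # nothing loaded yet
 
     def next(self):
         """Advance to the next microlude's settings and return them."""
         self.index = (self.index + 1) % len(self.presets)
         return self.presets[self.index]
 
 bank = PresetBank(PRESETS)
 print(bank.next())   # clicking through: microlude 01
 print(bank.next())   # microlude 02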


b) Things that I want to have done by the final deadline

In addition to all of the sound processing described in (a), I hope to create the same level of nuance and variation in the visual domain. Also in MAX, I would program 32 processing presets for the webcam stream that captures video of the page being typed in real time. It would be great to apply some of the same parameters used to craft the sound when crafting the video effects. Representing auditory transformations in the visual world is not meant to copy the method exactly, but rather to suggest a relationship between two modes of thinking. Coordinating and calibrating the numerous presets running concurrently will require a lot of troubleshooting, but will yield an artistic gesture with greater conviction.
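
To suggest how one preset could drive both domains at once, here is a rough Python sketch in which video settings are derived from the audio settings of a preset; the particular pairings (cutoff to blur, reverb to feedback, delay to frame delay) are placeholders of my own, not decisions from the piece.

 # Rough sketch of one preset driving both audio and video processing.
 # The pairings below are placeholders; in the actual MAX patch each audio
 # parameter would be wired to a video effect chosen by ear and eye.
 
 def video_params_from_audio(audio):
     """Derive video-effect settings from the audio settings of a preset."""
     return {
         # darker filtering -> blurrier image, open filter -> sharp image
         "blur_amount": max(0.0, 1.0 - audio["lowpass_hz"] / 20000.0),
         # more reverb -> longer video feedback trails
         "feedback": audio["reverb_mix"],
         # delay time reused as a frame delay on the webcam stream
         "frame_delay": int(audio["delay_ms"] / 40),   # assuming ~25 fps
     }
 
 preset = {"lowpass_hz": 4000, "reverb_mix": 0.6, "delay_ms": 120}
 print(video_params_from_audio(preset))
 # {'blur_amount': 0.8, 'feedback': 0.6, 'frame_delay': 3}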


c) Things that would be nice to have if I had unlimited time

As seen in some of the examples of similar work, it would be fantastic if the manual typewriter were automated. After I "performed" the ideal piece on the typewriter myself, automation would let me keep the dramatic and theatrical element of the physical device in any concert while ensuring absolute accuracy in execution. This would also make things much easier for the soprano, who would know exactly how the typewriter part will go, much like a tape part, but with the real process happening on stage. It is similar to a player piano, where rolls representing a given interpretation are played back through the instrument on which it was first performed. This would require a tremendous amount of technical study, parts, and time. Perhaps I could use MIDI values to control the different physical parameters of the automation (attack time, pitch replaced by the specific key, etc.).
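
Purely as a thought experiment for the automation idea, the sketch below maps MIDI note and velocity onto which key to strike and how hard; the note range and the strike action are hypothetical, since no actuator has been chosen.

 # Thought-experiment sketch for automating the typewriter from MIDI data.
 # Key layout, note range, and the "strike" action are all hypothetical.
 
 KEY_FOR_NOTE = {60 + i: ch for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz")}
 
 def handle_midi(note, velocity):
     """Translate a MIDI note-on into a (hypothetical) key strike."""
     char = KEY_FOR_NOTE.get(note)
     if char is None:
         return                      # note outside the mapped key range
     force = velocity / 127.0        # velocity -> strike strength (0..1)
     print(f"strike key '{char}' with force {force:.2f}")
 
 handle_midi(60, 100)   # strike key 'a' with force 0.79
 handle_midi(64, 30)    # strike key 'e' with force 0.24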

Also, if I were given unlimited time, I would work on composing the piece and fine-tuning the patch within the practicalities of the music. For this piece I am thinking of dichromacy as a basis for representing the audio-visual transformations between the microludes. Dichromacy (di meaning "two" and chroma meaning "color") is the state of having two types of functioning color receptors, called cone cells, in the eyes. Organisms with dichromacy are called dichromats. Dichromats can match any color they see with a mixture of no more than two pure spectral lights. For this work I would be interested in red-blue dichromacy. Here is an example of how the two color spectra would connect the microludes together (each entry gives the microlude number, the color name and hex value, its red and blue channel percentages, any solo role, and a link to its preset image; a short sketch of the percentage calculation follows the list):

Media:COLOR GRADIENT.jpg

1. Cool Black (#000000) 0% 0% Solo typewriter Media:preset01.png

2. Navy (#000080) 0% 50% Media:preset02.png

3. Dark Blue (#00008B) 0% 55% Media:preset03.png

4. Duke Blue (#00009C) 0% 61% Media:preset04.png

5. Medium Blue (#0000CD) 0% 80% Media:preset05.png

6. Blue (#0000FF) 0% 100% Media:preset06.png

7. Electric Ultramarine (#3F00FF) 25% 100% Media:preset07.png

8. Indigo (#4B0082) 29% 51% Media:preset08.png

9. Patriarch (#800080) 50% 50% Media:preset09.png

10. Vivid Orchid (#CC00FF) 80% 100% Media:preset10.png

11. White (#FFFFFF) 100% 100% Solo voice Media:preset11.png

12. Vivid Raspberry (#FF006C) 100% 42% Media:preset12.png

13. Rubine Red (#D10056) 82% 34% Media:preset13.png

14. Folly (#FF004F) 100% 31% Media:preset14.png

15. Rich Carmine (#D70040) 84% 25% Media:preset15.png

16. Munsell (#F2003C) 95% 24% Media:preset16.png

17. Crimson Glory (#BE0032) 75% 20% Media:preset17.png

18. Ruddy (#FF0028) 100% 16% Media:preset18.png

19. Spanish Red (#E60026) 90% 15% Media:preset19.png

20. Burgundy (#800020) 50% 13% Media:preset20.png

21. Cadmium Red (#E30022) 89% 13% Media:preset21.png

22. Carmine (#960018) 59% 9% Media:preset22.png

23. Red Devil (#860111) 53% 7% Media:preset23.png

24. Sangria (#92000A) 57% 4% Media:preset24.png

25. Rosewood (#65000B) 40% 4% Media:preset25.png

26. Red (#FF0000) 100% 0% Media:preset26.png

27. Rosso Corsa (#D40000) 83% 0% Media:preset27.png

28. Dark Candy Apple (#A40000) 64% 0% Media:preset28.png

29. Crimson (#990000) 60% 0% Media:preset29.png

30. Dark Red (#8B0000) 55% 0% Media:preset30.png

31. Maroon (#800000) 50% 0% Media:preset31.png

32. Cool Black (#000000) 0% 0% Solo typewriter Media:preset32.png
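
The two percentages beside each color are simply the red and blue channels of its hex value expressed as a share of 255; this small Python sketch shows the calculation and reproduces a few entries from the list above.

 # Red/blue channel percentages for each microlude color, computed from the
 # hex codes in the list above (values round to the listed percentages).
 
 def red_blue_percent(hex_code):
     """Return (red %, blue %) for a color like '#4B0082'."""
     r = int(hex_code[1:3], 16)
     b = int(hex_code[5:7], 16)
     return round(100 * r / 255), round(100 * b / 255)
 
 print(red_blue_percent("#00008B"))   # (0, 55)   Dark Blue
 print(red_blue_percent("#4B0082"))   # (29, 51)  Indigo
 print(red_blue_percent("#FF006C"))   # (100, 42) Vivid Raspberry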


Materials Needed

A speaker for hearing the processed sounds, preferably a stereo PA system

A webcam with an internal microphone

A manual typewriter

A piezo

An Arduino and the necessary electronic parts for hooking the piezo up to the laptop

A laptop computer with MAX/MSP

A means to fasten the piezo inside of the typewriter

A stand or some other fixed structure to secure the webcam in place

Audio cables between the laptop and speaker

Digital projector to see the processed video feed

White, blank letter size (8.5"x11") paper

Replacement typewriter ribbons (if needed)


List of Steps to Achieve the Minimal Viable Product

I predict the most time-intensive part of making this idea happen will be creating a patch (or several) that allows for highly refined real-time processing of the video and sound. It would be interesting if both were mapped to the same parameters, so that any given transformation in the sound of the typewriter would reliably also transform the text feed in an expressive way. Looking into MAX patches others have made for sound and video processing would be a good first step. From these, the next step would be developing a list of timbral and visual extremes to serve as programming goals for the project. Then I would brainstorm how best to display these parameters graphically for the user or player. Installing the hardware on the typewriter would probably be most useful before coding the patch, as it is good to be able to test the transformations on the actual instrument while writing in MAX. Once the patch and the physical instrument are both robust, the final step would be writing a short sample piece to adequately display the full depth of possibilities.
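
As a sketch of the hardware step, and assuming the Arduino simply prints each piezo reading as a number over USB serial, the laptop side could look something like the following Python (using pyserial); the port name, baud rate, and threshold are placeholders to be adjusted during testing.

 # Sketch of the laptop side of the piezo -> Arduino -> laptop chain, assuming
 # the Arduino prints one analog piezo reading per line over USB serial.
 # Port name, baud rate, and threshold are placeholders to tune while testing.
 
 import serial   # pyserial
 
 PORT = "/dev/ttyACM0"    # hypothetical; depends on the machine
 THRESHOLD = 200          # readings above this count as a key strike
 
 with serial.Serial(PORT, 9600, timeout=1) as port:
     while True:
         line = port.readline().decode("ascii", errors="ignore").strip()
         if not line.isdigit():
             continue                      # skip empty or partial lines
         if int(line) > THRESHOLD:
             print("key strike detected")  # here: forward a trigger to the patch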


Presentation (December 03, 2014)

Media:PART 1.png

Media:PART 2.png