Here is my 220c Wiki page!
I am working on a spatial-dispersed-narrative-woodworking project. It will consist of a number of handmade resonant wooden boxes with transducers inside, dispersed in space. I'm imagining voices resonating through these boxes, carrying looped, scripted conversations, and letting people wander through the installation at leisure. TBD.
Current Materials (04.08):
- x1 DTA-1 mini-amp
- x2 DAEX25FHE-4 transducer
- x4 DAEX25 transducer
- x2 HDN-8 transducer
- x1 DTA-120BT2 mini-amp
~ issues ~
- wireless boxes? how to get amps/batteries/sound-source into boxes as small as 4" x 4" x 4".
~ goals ~
- preliminary sound design
- preliminary testing of resonances with current box model
This week I got the wireless, Bluetooth-connected, battery-powered transducer setup working. The next steps consist of:
(a) installing semi-permanently in closed boxes,
(b) scaling up / figuring out a multi-channel Bluetooth array, and
(c) sound design/composing.
My current, working setup consists of the following:
- AMP: DAMGOO Audio Amplifier Board with Bluetooth  ... $12.99
- TRANSDUCER: DAEX25FHE-4  (x2) ... $11.29 each
- BATTERY: TalentCell Rechargeable 12V 3000mAh Lithium ion Battery Pack  ... $24.79
- BOX: Bought at Goodwill, approx. $10.00
- TOTAL $$: $70.36 (not including tax/shipping costs/etc.)
Clearly, this is more expensive than is optimal. With 8 boxes, the project quickly becomes very costly ($500 - $600). Besides tax and shipping, this estimate also leaves out soldering supplies, disposable batteries, nails/screws, and woodworking costs.
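To keep the budget honest as the box count grows, here's a quick sketch of the arithmetic, using the prices from the list above (the thrifted-box price is approximate, and per-box extras like screws and solder aren't counted):

```python
# Rough per-box cost from the current working setup (USD).
AMP = 12.99          # DAMGOO Bluetooth amp board
TRANSDUCER = 11.29   # DAEX25FHE-4, two per box
BATTERY = 24.79      # TalentCell 12V 3000mAh pack
BOX = 10.00          # thrifted box, approximate

per_box = AMP + 2 * TRANSDUCER + BATTERY + BOX
print(f"per box: ${per_box:.2f}")      # per box: $70.36
print(f"8 boxes: ${8 * per_box:.2f}")  # 8 boxes: $562.88, before tax/shipping
```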
For now, though, I want to get a single box working so I can begin drafting the idea. As I expand, I'll continue to look for better, cheaper solutions for wireless setups.
In terms of installation, there are already issues with fixing the transducers in the box sturdily but non-permanently, so that I can keep adjusting their positions and trying different materials. The issues are mostly frustratingly simple: the bores in the transducers are too small for screws long enough to pass fully through the wall of the box, for example, or the box is too small to fit a screwdriver in at the right angle. My next-door neighbor, a woodworker, enlarged the holes in the transducers to accept longer screws. A hacky solution, but TBD.
Another solution would be using different amp/box combinations. I obtained several boxes from Goodwill of various sizes, from about the size of a shoebox to a miniature chest of drawers about 2' by 2'. I'm planning on testing the heavier transducers with the larger boxes, but each shape has different installation affordances and difficulties.
Ideally, I'd be able to take a sculptural approach to these objects, observing what shapes lend themselves to certain resonant properties, and making objects that correspond with the sound in some way. For this project, however, I'm mostly laying the groundwork and doing some of the creative research into how this system can work.
Regarding composition, there are two discrete projects I'd like to work on. Again, for this quarter, I need to cull my vision a little and focus on a single one. The first is the script idea. My vision is of a dispersed narrative: recorded voices emerging from objects that stand in for people. I'm imagining a slightly eerie effect, a collection of inanimate objects echoing with human presence. I'm imagining it being part spatial sound-art composition, part experimental storytelling. I would want the sonic quality of the installation to be essential. The scripts could be scored so that they converge, swell, and overlap in compositionally interesting ways. They could be processed so that particular timbres and resonances are highlighted (taking inspiration from Luc Ferrari's Presque Rien, for example). At the same time, the scripts would interweave and overlap semantically, interacting to create 'nonlinear' stories. (Lucky for me, this part is up to my collaborator, a writer.)
The second idea is simply taking advantage of the unique timbral/resonant character of the boxes and the affordances of a point-source multi-channel setup (no ambisonics, just speakers in space) and writing a piece for it. From preliminary experiments, percussive sounds work particularly well with transducers in resonant spaces, and pure tones playing off the particular resonances are also effective.
So – this week I am traveling. I'd like to focus on the compositional aspect and start writing sketches for the setup, which I can test when I'm back.
This week, my collaborator and I recorded seven scripts she had written, focusing on the circular-narrative aspect. I put them into Reaper and processed them binaurally, so that the scripts were evenly spaced around the listener's head. The "cocktail party effect" was enormously successful, especially with some spatial reverb. It really sounded like being in a loud, reflective, indoor space full of people. Using the IEM plugins, I was able to artificially 'navigate' the space. Some conversations came into focus as others receded into the wash, exactly what I imagine navigating the conversations IRL, via the boxes, will be like. Overall, enormous success re: proof-of-concept with looped, circular narratives, happening concurrently, dispersed in space.
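For the even spacing, the azimuths are just the circle divided by the number of scripts. A tiny sketch of that calculation (the function name is mine; I set the actual angles by hand on the IEM panner):

```python
# Evenly space n mono sources around the listener.
# Azimuth in degrees: 0 = front, increasing around the circle.
def even_azimuths(n):
    return [round((360 / n) * i % 360, 1) for i in range(n)]

print(even_azimuths(7))
# [0.0, 51.4, 102.9, 154.3, 205.7, 257.1, 308.6]
```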
Examples linked below [TBD]
Continuing to work with the sonic element this week, I am playing with the semantic/sonic layering. This is an idea I'd like to try to clarify a bit more, but one that seems particularly rich to explore in this setup.
As I see it, the layers are:
- The (semantic) content of conversations between two or more people – what are they talking about?
- The (sonic) content of the individual words – alliteration, repetition, rhyming, stuttering, pitch, intensity, emotion, etc.
- The (semantic) content of many conversations occurring at once (the idea of a dispersed narrative, a story that can be pieced together by walking through the space and eavesdropping on each conversation)
- The (sonic) content of many conversations in space – something to be composed: everyone suddenly falling silent, or saying the same word at once in different contexts, or gradually moving to whispers, or getting mad at similar points, etc.
- The conceptual or philosophical framework of disembodied voices in space post-COVID. What echoes remain in abandoned public spaces? Does sound continue to reverberate in ways we cannot hear? Can we eavesdrop on the past? How do we piece together our notions of reality from fragments of experience?
I'd like to play with each of these elements, and their relationships, in the composition of this piece.
The two semantic components are relevant to the script writing. My collaborator and I are discussing how to frame the conversations and how to effectively approach the idea of a dispersed narrative – something not too obvious, but with enough connections that the listener can put together the pieces and pick up on repeated bits of information.
The sonic element is something I anticipated composing into the scripts as they were written, but I think it'll be much easier, more effective, and more controllable to do these effects via digital editing. For example, I can rearrange the audio so that a word that appears in each script is said at precisely the same time by all parties. It requires some editing to smooth things out, so that the cuts aren't abrupt or harsh, but it definitely seems feasible.
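Lining up a shared word across tracks is just a delay calculation: shift each track so its occurrence lands on the latest one. A minimal sketch (the timestamps below are invented placeholders, not from the actual recordings):

```python
# Time (seconds) at which the shared word occurs in each script's track.
# Placeholder values for illustration.
word_times = {"script1": 12.4, "script2": 7.9, "script3": 15.2}

# Delay each track by (latest occurrence - its occurrence) so all
# parties say the word at the same instant.
latest = max(word_times.values())
delays = {name: round(latest - t, 2) for name, t in word_times.items()}
print(delays)  # {'script1': 2.8, 'script2': 7.3, 'script3': 0.0}
```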
What I'm most excited about right now is the idea of using concatenative synthesis as a tool to simultaneously play with both the sonic and semantic content of conversations. Concatenative synthesis takes a corpus (a large database of audio) and a target (a single audio file), and selectively chooses fragments of the corpus that most closely match the target, according to parameters such as spectral content, dynamic contour, length, and pitch, to create a collage-type output. It is a very good tool for creating chopped audio that matches the contour and cadence of a vocal sample. I experimented this week with using the seven first-draft scripts as the corpus and each one in turn as the target. By tweaking certain parameters, I could selectively focus on specific timbral elements from the corpus, making the output sound more whispery, choppy, delicate, or resonant.
I want to continue to play with concatenative synthesis to erode and replace the semantic aspect of the conversations and disambiguate the sonic and semantic vectors of the composition. As a compositional technique, it invites the audience to listen to a crowd speaking nonsense with the lilt of avid conversation, lets me subtly control the aggregate spectral content of the piece, and highlights specific conversations or moments by selectively removing meaning.
This week, I used AudioGuide by Ben Hackbarth, an extremely powerful and versatile tool. For real-time concatenation and interpolation over time, however, I'm looking into tools like MuBu and FluCoMa. Next week we will write the next round of sketches and continue playing with compositional transformations.