Here is my 220c Wiki page!
I am working on a spatially dispersed narrative woodworking project. It will consist of a set of handmade resonant wooden boxes with transducers inside, dispersed in space. I'm imagining voices resonating through these boxes, carrying looped, scripted conversations, and letting people wander through the space at their leisure. TBD.
Current Materials (04.08):
- x1 DTA-1 mini-amp
- x2 DAEX25FHE-4 transducer
- x4 DAEX25 transducer
- x2 HDN-8 transducer
- x1 DTA-120BT2 mini-amp
~ issues ~
- wireless boxes? how to get amps/batteries/sound-source into boxes as small as 4" x 4" x 4".
~ goals ~
- preliminary sound design
- preliminary testing of resonances with current box model
This week I got the wireless, Bluetooth-connected, battery-powered transducer setup working. The next steps are:
(a) installing semi-permanently in closed boxes,
(b) scaling / figuring out a multi-channel bluetooth array, and
(c) sound design/composing.
My current, working setup consists of the following:
- AMP: DAMGOO Audio Amplifier Board with Bluetooth  ... $12.99
- TRANSDUCER: DAEX25FHE-4  (x2) ... $11.29 each
- BATTERY: TalentCell Rechargeable 12V 3000mAh Lithium ion Battery Pack  ... $24.79
- BOX: Bought at Goodwill, approx. $10.00
- TOTAL $$: $70.36 (not including tax/shipping costs/etc.)
Clearly, this is more expensive than optimal. With 8 boxes, the project quickly becomes very costly ($500 - $600). Besides tax/shipping costs, this figure also doesn't include soldering supplies, disposable batteries, nails/screws, and woodworking costs.
For now, I want to get a single box working so I can begin drafting the idea. As I expand, however, I'll continue to look for better, cheaper solutions for wireless setups.
In terms of installation, it's already tricky to mount the transducers in the box sturdily but non-permanently, so I can keep adjusting their positions and trying different materials. The issues are mostly frustratingly simple: the mounting holes in the transducers are too small for screws long enough to pass fully through the wall of the box, for example. Or the box is too small to fit a screwdriver in at the right angle. My next-door neighbor, a woodworker, enlarged the holes in the transducers to take longer screws. A hacky solution, but TBD.
Another solution would be using different amp/box combinations. I obtained several boxes from Goodwill of various sizes, from about the size of a shoebox to a miniature chest of drawers about 2' by 2'. I'm planning on testing the heavier transducers with the larger boxes, but each shape has different installation affordances and difficulties.
Ideally, I'd be able to take a sculptural approach to these objects, observing what shapes lend themselves to certain resonant properties, and making objects that correspond with the sound in some way. For this project, however, I'm more laying the groundwork and doing some of the creative research into how this system can work.
Regarding composition, there are two discrete projects I'd like to work on. Again, for this quarter, I need to cull my vision a little and focus on a single one. The first is the script idea. My vision is of a dispersed narrative: recorded voices emerging from objects that stand in for people. I'm imagining a slightly eerie effect, a collection of inanimate objects echoing with human presence. I imagine it as part spatial sound-art composition, part experimental storytelling. I want the sonic quality of the installation to be essential. The scripts could be scored so that they converge, swell, and overlap in compositionally interesting ways. They could be processed so that particular timbres and resonances are highlighted (taking inspiration from Luc Ferrari's Presque Rien, for example). At the same time, the scripts would interweave and overlap semantically, interacting and creating 'nonlinear' stories. (Lucky for me, this part is up to my collaborator, a writer.)
The second idea is simply taking advantage of the unique timbral/resonant character of the boxes and the affordances of a point-source multi-channel setup (no ambisonics, just speakers in space) and writing a piece for it. From preliminary experiments, percussive sounds work particularly well with transducers in resonant spaces, and pure tones playing off the particular resonances are also effective.
So – this week I am traveling. I'd like to focus on the compositional aspect and start writing sketches for the setup, which I can test when I'm back.
This week, my collaborator and I recorded seven scripts she had written, focusing on the circular-narrative aspect. I put them into Reaper and processed them binaurally, so that each script was evenly spaced around the listener's head. The "cocktail party effect" was enormously successful, especially with some spatial reverb. It really sounded like being in a loud, reflective, indoor space full of people. Using the IEM plugins, I was able to artificially 'navigate' the space: some conversations came into focus as others receded into the wash, exactly how I imagine navigating the conversations will feel IRL, via the boxes. Overall, an enormous success as a proof of concept for looped, circular narratives happening concurrently, dispersed in space.
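The "evenly spaced" step is simple to compute: with N scripts around the head, source i sits at azimuth i·360/N degrees. A minimal sketch (the function names and the 2 m radius are my own illustration, not any plugin's API):

```python
import math

def azimuths(n_sources):
    """Azimuth in degrees for each source, evenly spaced on a circle."""
    return [i * 360.0 / n_sources for i in range(n_sources)]

def positions(n_sources, radius=2.0):
    """(x, y) positions in meters around a listener at the origin."""
    return [
        (radius * math.cos(math.radians(a)), radius * math.sin(math.radians(a)))
        for a in azimuths(n_sources)
    ]
```

For seven scripts that's one voice roughly every 51.4 degrees, which is what I dialed into the panner.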
Examples linked below [TBD]
Continuing to work with the sonic element this week, I am playing with the semantic/sonic layering. This is an idea I'd like to try to clarify a bit more, but one that seems particularly rich to explore in this setup.
As I see it, the layers are:
- Semantic: Individual conversations between two or more people – what are they talking about?
- Sonic: Individual words – alliteration, repetition, rhyming, stuttering, pitch, intensity, emotion, etc.
- Semantic: Many conversations occurring at once – a dispersed narrative, a story that can be pieced together by walking through the space and eavesdropping.
- Sonic: Many conversations in space. To be composed: everyone suddenly falling silent, or saying the same word at once in different contexts, or gradually moving to whispers, or getting mad at similar points, etc.
- Conceptual: The philosophical framework of disembodied voices in space post-COVID. What echoes remain in abandoned public spaces? Does sound continue to reverberate in ways we cannot hear? Can we eavesdrop on the past? How do we piece together our notions of reality from fragments of experience?
I'd like to explore each of these elements and their relationships in the composition of this piece.
The two semantic components are relevant to the script writing. My collaborator and I are discussing how to frame the conversations and how to effectively approach the idea of a dispersed narrative. We want something that's not too obvious but with enough connections that the listener can put together the pieces and notice repeated bits of information.
The sonic element is something I anticipated composing in tandem with the scriptwriting, but I think it'll be easier, just as effective, and more controllable to do the desired conversation-composition via digital editing. For example, I can rearrange the audio so that a word that appears in each script is said at precisely the same time by all parties. It will require smooth editing so that the cuts aren't abrupt or harsh, but it seems feasible.
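As a sanity check on the editing math: if the shared word occurs at a different time in each script, each track just needs a shift of (target time minus word time). A tiny sketch with made-up timestamps (the function name is mine):

```python
def sync_offsets(word_times, align_at=None):
    """Offset in seconds to add to each track so the shared word lands
    at `align_at` in all of them.

    If align_at is None, align to the latest occurrence so no track
    needs a negative shift (i.e. no audio is trimmed from the start).
    """
    if align_at is None:
        align_at = max(word_times)
    return [align_at - t for t in word_times]
```

E.g. if the word falls at 1.0 s, 2.5 s, and 2.0 s in three scripts, the tracks get shifted by 1.5 s, 0 s, and 0.5 s respectively.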
What I'm most excited about right now is the idea of using concatenative synthesis as a tool to play with the relationship of the sonic and semantic. Concatenative synthesis uses a corpus (a large database of audio) and a target (a single audio file), and selectively chooses fragments of the corpus that most closely match the target according to parameters of spectral content, dynamic contour, length, and pitch. The output is a collage-like soundfile that mimics the target from the material of the corpus. It is a very good tool for matching the contour and cadence of a vocal sample. I experimented this week with the first-draft scripts as the corpus and target(s). By tweaking certain parameters, I could focus on specific timbral elements from the corpus to make the output sound more whispery, choppy, delicate, or resonant.
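To make the mechanics concrete, here's a toy version of that corpus/target matching. The grain size and the two features (RMS and spectral centroid) are arbitrary choices of mine; real tools like Audioguide use far richer descriptors, onset-aware segmentation, and overlap-add, so this is only a sketch of the idea:

```python
import numpy as np

def grain_features(frame, sr):
    """Describe one grain by RMS energy and spectral centroid."""
    rms = np.sqrt(np.mean(frame ** 2))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, centroid])

def concat_synth(corpus, target, sr=44100, grain=1024):
    """Rebuild `target` frame by frame from best-matching corpus grains."""
    n = len(corpus) // grain
    corpus_grains = corpus[: n * grain].reshape(n, grain)
    feats = np.array([grain_features(g, sr) for g in corpus_grains])
    # Normalize so RMS and centroid contribute comparably to the distance.
    scale = feats.max(axis=0) + 1e-12
    feats = feats / scale
    out = []
    for i in range(len(target) // grain):
        tf = grain_features(target[i * grain:(i + 1) * grain], sr) / scale
        best = int(np.argmin(np.linalg.norm(feats - tf, axis=1)))
        out.append(corpus_grains[best])
    return np.concatenate(out)
```

Swapping in different features, or weighting them, is the equivalent of the parameter-tweaking that made the output more whispery, choppy, delicate, or resonant.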
I want to use concatenative synthesis to erode and replace the semantic aspect of the conversations and disambiguate the sonic and semantic layers of the composition. As a compositional technique, I could draw the audience's attention towards a crowd as an abstract sonic object without any meaning, then highlight certain semantic relationships between words occurring at once. The same perceptual experience of avid conversation would remain while the words themselves would disappear. I could subtly control the aggregate spectral content of the piece, or highlight one conversation at a time by selectively removing meaning from the others.
This week, I used Audioguide by Ben Hackbarth, an extremely powerful and versatile tool. For real-time concatenation and interpolation over time, however, I'm looking into tools like MuBu and FluCoMa. Next week we will write the next round of sketches and continue playing with compositional transformations.
This week, my collaborator and I honed in on the next draft of the scripts. We discussed the overarching theme of absence and the many different takes we could have on that premise. The "nonlinear narrative" is realized as a somewhat abstract exploration of an almost philosophical concept from many different perspectives. Each character will be individually developed and complex, and will take a radically different approach to the central theme. It will ideally leave the listener with a sense of absence, an open-ended narrative of loss and lack.
Other updates: temporary working title of "Horror Vacui", an art history term describing fear of empty space. The "scary" connotation of horror makes me do a double take, though. So maybe – sticking with the Latin to preserve the reference to visual art history – "overwhelming" (videam), "sudden" (subita), "human" (hominum), "voice" (vox), "sound" (sana), "public" (publicae), or "common" (communia). Or not, and I can think of a slightly less pretentious title. TBD.
Finally, for the real-time sound synthesis, I'm still trying to figure out FluCoMa, which has been a bit harder to pick up than anticipated.
- 2-3 new drafts of scripts are done!
- FluCoMa is still giving me trouble.
- We decided to focus on finding objects – instead of making boxes – both for aesthetic and practical reasons. So I am currently looking into dirt-cheap second-hand stores in LA where I can find things like resonant old shoes, containers, tins, drawers, bowls, cups, chairs, etc. Each object will be paired with a script and have the transducers attached via double-sided mounting tape. The work of next week!
So this weekend, we anticipate (a) finding the objects and pairing them with scripts, (b) beginning to re-record the scripts with a wider variety of voices, and (c) beginning to compose out transformations with a (working!) FluCoMa or other concatenative synthesis engine.
Objects found! Free Craigslist is my best friend. Found an amazing pair of operating speakers, a file cabinet, a mysterious metal bin that looks like it housed hay or something, among others within a few blocks of my house. I began testing with transducers this week, ordered a few extra battery-amp-transducer combinations, and started recording scripts! We decided to use the following objects for the final version:
- a pair of speakers
- a chair
- a lamp
- a stack of pillows
- a pair of shoes
- a file cabinet
- a metal thing filled with food
Also, I'm sticking with Audioguide. The nuance of the program, as well as the time saved by not having to learn a brand-new one, far outweighs the relatively subtle benefits of working with a real-time concatenative synthesis engine like FluCoMa. Instead, I'm going to concatenate the scripts phrase by phrase, by hand, to create the interpolation from meaning to sound I'm aspiring towards.
Continued recording scripts. So far we've recruited 14 (!) friends to play various roles, although some voices repeat, most glaringly our own (Angelica's and mine). We also recorded a FaceTime conversation we had about the piece to add to the script, a sort of meta-commentary on the arrangement, lifting back the curtain Oz-style and imbuing a bit of lightness and humor into otherwise pretty heavy material. It also discusses the metaphors of each object, making some of them explicit for anyone who decides to listen in...
I installed the remainder of the transducers this week, with various pros and cons. The file cabinet sounds great, boomy and full, fun to blast music through, but it can be really difficult to discern words or quiet sounds because of the noise of the metal and the boominess. The speakers, on the other hand, are a little too clear compared to the other objects. The pillow is muffled... duh :/ Not sure how to get around that. But the metal container is great, the shoes are much more resonant and clear than I expected, and the chair and lamp are decent. With more time, I would love to collect a junkyard's worth of objects and test every conceivable configuration of transducer, material, shape, resonance, and sound-type. Maybe I'll make a found-object instrument series one day...
Also this week we finished recording the scripts, and I tested them in virtual space, and they sound fantastic. I'm very, very excited about them and proud of the work by my collaborator Angelica!
Crunch time. This week I wrote a series of run files for Audioguide, each defining a sound-space I wanted to move through. I had run files titled "whispery", "mumble", "gasp", "almost correct", and more. I tweaked parameters for a while to get the sound I wanted, then processed all the scripts with the same settings. So now I have seven versions of seven scripts, totaling 49 tracks to mix.
I then arranged a rough trajectory through each of the sound spaces so that the scripts overlap in their transformations, without being precisely synced, to give an organic, gradual transformation over time. I then interpolated phrase by phrase for each of the scripts over 20 minutes, a process that took hours and hours and hours.
Then I took all the objects to the gallery space (owned by a friend's mother) and arranged them. I connected my laptop to each Bluetooth amp, for 7 total connections, and created an aggregate device in Audio MIDI Setup. Then I routed each track in Reaper to the stereo hardware output corresponding to that amp's channels on the aggregate device. I tried playing back the Reaper file and... horror. The Bluetooth was glitching like crazy. You couldn't understand the words, and it sounded bad. I realized I had only ever tested a maximum of five simultaneous Bluetooth devices connected to my computer, not seven (14 channels). So for the final version I had to cut two scripts and two objects – the chair and the lamp. Each of the scripts was fantastic and it was a real shame to cut any of them. But five scripts worked flawlessly.
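For reference, the routing amounts to: script i goes out stereo pair (2i+1, 2i+2) on the aggregate device (1-based channel numbers, which is how Reaper displays hardware outputs), so seven scripts need 14 channels. A trivial sketch of that mapping, with a hypothetical function name:

```python
def stereo_pair(script_index):
    """1-based (left, right) aggregate-device channels for a 0-based
    script index: script 0 -> (1, 2), script 1 -> (3, 4), and so on."""
    return (2 * script_index + 1, 2 * script_index + 2)
```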
Some small issues to note for the next iteration. Mixing the levels was really difficult. Next time I would like to spend a lot more time EQing and making each script really as clear as it can get on the object. I also realize the walkaround format, plus the gradual erasing of meaning over time, makes it so that it can be pretty difficult to understand the scripts, which are each complex and rich in their own right. But overall I’m very happy with how it turned out!! To be honest, like with any project like this, it's given me about 100 new ideas for projects that could be made from each of the components – transducers, bluetooth arrays, found-object instruments, concatenative synthesis.
For the final deliverable, I recorded a video of me walking through the space with a four-channel mic, rendered the audio to binaural, and put it online. So you can hear the piece in space from my perspective as I wander through. Ideally, though, everyone would be able to wander through at their own leisure.
The final piece is titled "Not Here" and is on YouTube at this link: https://www.youtube.com/watch?v=4Vd8l0nKVX8
Thanks to Chris, Julius, and Scott for a great quarter!