MUSIC 356 - ETUDE #1

"Poets of Sound and Time"

Kunwoo Kim

 

POEM #1 - "I sometimes dream of yesterday and remember of tomorrow."

(TUESDAY - THIS ONE!!)

 

POEM #2 - "am-bee-tea-eye"

 

<Chuck Codes & Streaming assets>

[DOWNLOAD_LINK]

 

<Reflections>

I sometimes think about how I “dream of yesterday, and remember of tomorrow” as a way of avoiding regrets about the past and fears about the future. After this bout of daydreaming, the only conclusion I am left with is that the present is the most important, as it is the only moment I have control over.

I wrote the poem after that reflection, and shaped it so that word2vec could play a role easily. My other poem, “am-bee-tea-eye,” felt chaotic and whimsical, so I took on a more serious tone with subtle nuances for this one.

Then I created the aesthetics: the granular synthesis chords, the melodies, the typewriter sounds, the rain, and the timing of the presentation. The process was a blast, as I felt I was capturing the emotions I had while writing this poem. However, since antonyms play a major role in the piece, the horror loomed when I found out that word2vec wasn’t the medium I expected it to be: I could not find a good way to have word2vec produce the right (or even close) antonyms.

I had to make design choices about whether to emphasize the medium of word2vec or the aesthetics of the poem; the two did not create good synergy. I made more choices for the latter, decreasing the role of word2vec. For example, rather than searching for opposite vectors to get antonyms, I supplied five antonyms myself and let word2vec search for similar words around them (a rough sketch of this approach is below). My initial thought was that word2vec would expand my regrets and fears, leaving room for a wider reflection, yet it was a little frustrating, as I felt it was harming my aesthetics. No matter how many iterations I went through, I felt I failed to justify the use of word2vec, as none of them surpassed the quality of my original poem (or even created happy accidents).
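
Here is a minimal ChucK sketch of that idea, assuming the ChAI Word2Vec object (load() and getSimilar()) and a 50-dimensional GloVe model file; the word pairs are placeholders, not the ones from the piece:

// sketch: hand-picked antonyms + word2vec nearest-neighbor expansion
Word2Vec model;
// load a pre-trained 50-dimensional model (file name is an assumption)
if( !model.load( me.dir() + "glove-wiki-gigaword-50.txt" ) )
{
    <<< "cannot load word2vec model" >>>;
    me.exit();
}

// hand-picked source words and their antonyms (placeholders)
["regret", "fear"] @=> string sources[];
["pride", "hope"] @=> string antonyms[];

// how many neighbors to gather around each antonym
5 => int k;
string neighbors[k];

for( int i; i < sources.size(); i++ )
{
    // instead of negating vectors, expand around the hand-picked antonym
    model.getSimilar( antonyms[i], neighbors.size(), neighbors );
    <<< sources[i], "->", antonyms[i] >>>;
    for( int j; j < neighbors.size(); j++ )
        <<< "    ", neighbors[j] >>>;
}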

I finally found the right balance after playing around with the words a lot. Some words had more usable neighbors than others, so I deliberately chose my word pool so that it would not give results like "shwa", "oooooooooooooooooooooooo", or website names. (I could not use the filtered model, since I wanted to use the 50-dimensional one.) I'm not sure if it was a computation issue, but the sound got clipped in places, and I could not find a resolution for it. Still, I am quite satisfied with the overall result! Word2vec actually gave some interesting adjectives that I did not think of at first!

Designing with word2vec gave me a bit of anxiety, as I love fine-tuning aesthetics with detailed and intentional design choices. Unless the piece was chaotic and whimsical, it was hard to justify using word2vec, as it was incredibly difficult to capture the subtle aesthetics I imagined. I feel like, over the course of this class, I need to learn new design processes for working with AI. I think AIs are really good at what humans are bad at, but also really bad at what humans find easy.

Is designing backwards from a human-envisioned experience possible with AI? Does the inspirational North Star have to change because we’re riding a ship paddled by AI? Or should we be looking for a North Star that only an AI-paddled ship can reach?

 

<Acknowledgements>

Thanks to Ge and Yikai for creating the tools and ideas for this etude! Thanks to the 356 classmates for all the inspiring discussions in class!