Grant Bishko

Music 256a: Reading Response

11/21/21

“Humans In the Loop” & “Experimental Creative Writing”

While the reading this week was interesting, I was really fascinated by the two videos: Allison Parrish's talk about word vectors and the video demonstration of ISSE cutting and editing sounds.

First of all, Allison's demonstrations of using technology to create poetry and writing were SO cool. They make a strong case that we can use these new technologies to make art without replacing it (a concern I have raised in previous reading responses). I was struck by the variety of transformations she could apply to text to make amazing new things. For example, morphing Frankenstein into Genesis was simply fascinating. I would absolutely love to study whatever ML (??) algorithm she used to do that. The other, phonetics-based demo was just as cool, with the sound slowly traveling through the mouth.

I was intrigued by (confused by, but also fascinated with) the idea that we can give words numeric values. For colors this makes complete sense to me, as we can also do this with pitches (frequencies). But how is it possible to give semantics a numeric value, to the point where we can take the average of two words? A value that isn't phonetic? The example of the average of Night and Day being evening? So cool. I don't fully understand how this is possible, but my wheels are turning: how can we apply this to music? What if we gave different intervals or melodies or chords numeric values (in a vector or something) and transformed them mathematically? What new amazing things could we come up with?
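To get my head around the word-averaging trick, here is a minimal sketch of what I imagine is going on, using pretrained GloVe word embeddings loaded through gensim (my assumption; I don't actually know which vectors or model Allison used). Each word is a point in a high-dimensional space, so "averaging" two words just means averaging their coordinates and then asking which word's vector lies nearest the result:

import gensim.downloader as api

# Load small, pretrained 50-dimensional GloVe word vectors
# (an assumption; not necessarily the embeddings from the talk).
model = api.load("glove-wiki-gigaword-50")

# Average the vectors for "night" and "day" by hand...
midpoint = (model["night"] + model["day"]) / 2

# ...then find the words whose vectors sit closest to that midpoint.
print(model.similar_by_vector(midpoint, topn=5))

The same arithmetic would work on any vector representation, which is why the chords-and-melodies-as-vectors idea above seems plausible to me.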

As Ge mentioned with the example from class of making legal documents more readable, the gradient is what makes this an interesting idea as a tool. Allison had this gradient when she slowly morphed Frankenstein into Genesis, or slowly moved the phonetics through her mouth. How could this also be applied to sounds? Could we design some tool that slowly turns a given sound into something completely new? Could we do this with voices? Languages? Pop music?
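In vector terms, I assume that gradient is just linear interpolation: take small steps from one point toward another and look at what you pass along the way. Here is a little self-contained sketch, again on word vectors (same hedges as above), that walks from "night" toward "day" step by step:

import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # assumed embeddings, as above

def interpolate(a, b, steps):
    """Yield vectors stepping gradually from point a toward point b."""
    for t in np.linspace(0.0, 1.0, steps):
        yield (1.0 - t) * a + t * b

# Print the nearest word at each step of the night -> day gradient.
for v in interpolate(model["night"], model["day"], steps=7):
    word, _ = model.similar_by_vector(v, topn=1)[0]
    print(word)

Swap the word vectors for some vector representation of audio frames or timbre and, in principle, the same loop would morph one sound into another; that feels like exactly the kind of tool my questions above are asking for.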

Finally, I really enjoyed watching the video example of the Interactive Source Separation Editor in action. I knew this existed, as I believe we saw it in use in my Music 220a class, but I never understood how it worked, nor was I able to follow the professor as they used it. This is SO cool. I definitely want to play around with this software on my own time and see the wonders it can do for myself.

The moral is that everything we have studied in this class so far has been super interesting to me, and this music/sound/tech field is definitely something I will continue to explore. I look forward to what else I will learn throughout my years at Stanford and at CCRMA.