Reading Response #9 to "Humans in the Loop" reading + "Experimental Creative Writing" video

Aaron H.
12/3/2022
Music 256A / CS476a, Stanford University


Allison Parrish’s talk was fascinating, especially the way that operating on the vector representations of words turns up other words with related meanings. Who would have thought that an algorithm could add “water” + “frozen” and arrive at “ice,” or that “purple” – “blue” would yield “red”? It makes me curious how these answers would change based on context. For example, how would adding cultural context change some of the results of operating on words? If the training data were more poetic or metaphoric, could that change the associations between words entirely? I could imagine it’s possible to bias an ML algorithm so the solution is whatever you want. This also makes me wonder how much the results of these word vectors influence how we think about semantics and the words we use. It reminds me a little of how Google’s search results depend on where you live. For example, being in a ZIP code that is predominantly liberal can surface search results that lean more left, and vice versa. I would imagine the search history of those around you may influence the results as well. I’ve personally been grappling with this and wonder whether we should be actively pushing back against implementations that encourage groupthink.
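The word-arithmetic idea above can be sketched in a few lines. This is a minimal toy illustration, not Parrish's actual system: the tiny hand-picked 3-dimensional vectors below (and the words chosen for the vocabulary) are assumptions for demonstration, whereas real embedding models like word2vec or GloVe learn hundreds of dimensions from large corpora. The mechanics are the same: add or subtract vectors, then find the nearest word by cosine similarity.

```python
import math

# Hand-picked toy "embeddings" -- purely illustrative, not learned from data.
vecs = {
    "water":  (0.9, 0.1, 0.1),
    "frozen": (0.0, 0.9, 0.1),
    "ice":    (0.85, 0.95, 0.15),
    "steam":  (0.9, -0.8, 0.2),
}

def add(a, b):
    # Component-wise vector addition.
    return tuple(x + y for x, y in zip(a, b))

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query, exclude=()):
    # Vocabulary word closest to the query vector, skipping excluded words
    # (analogy queries conventionally exclude their own input words).
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cosine(vecs[w], query))

# In this toy space, "water" + "frozen" lands nearest "ice".
print(nearest(add(vecs["water"], vecs["frozen"]), exclude={"water", "frozen"}))  # ice
```

Libraries like gensim expose the same operation directly (e.g. `most_similar(positive=["water", "frozen"])` on a loaded word2vec model), where the answers depend entirely on the corpus the vectors were trained on, which is exactly why a more poetic corpus could shift the associations.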

I also enjoyed reading Ge’s blog as a continuation of what was discussed in Artful Design. I personally lean toward the human-in-the-loop approach, specifically because of the research I have conducted with individuals experiencing hearing loss. Individual perception is so heterogeneous and specific that I can’t really imagine a single algorithm fitting the needs of every person. It also seems backwards not to include the individual you are designing for in the loop, so the experience can be made optimal for them. I resonate with focusing on ways to use ML and AI to enhance someone’s ability to do something, such as creating music; it seems particularly powerful, almost like putting brains together, the neural net and the human brain. Ge does raise a good question about whether we should be asking if AI can eventually create music the way a human can. I think that framing overlooks the more important questions of whether it’s a good idea and what we as a society want it to look like. Personally, I think the technology is going to continue to advance, so it’s not a question of if but when. What’s important to me, at least, is how we encourage the people building these new technologies to consider the ethics and ramifications of what they are building.