Critical Response #2
February 20th, 2023
Music 356 / CS470, Stanford University
Reflecting on human-in-the-loop AI design this time, I want to examine the ethical dimension of this design philosophy. I argue that human-in-the-loop design does not inherently make a system more ethical; instead, it simply allows a human's ethics to be projected onto the system. When we analyze human-in-the-loop design, we often do so in the shoes of ethical people and project that experience onto our analysis. We focus on the fact that human-in-the-loop design allows us to counteract potential biases present in raw data, and on the fact that this design allows for an increased level of transparency. I want to make clear that I personally believe in human-in-the-loop design. That said, I've reacted to these articles in two of Ge's classes already–this time I want to look at the other side: how can human-in-the-loop design create increased harm?
The obvious first answer is the people creating a design adding their own biases into the system. The data we feed to AI systems is inherently biased, simply because everything and everyone in our society has bias. It is impossible to collect data in a bias-free way, and one of the ideas behind human-in-the-loop design is to place a human along the way who can ensure that none of the biases present in the data have the chance to meaningfully affect the system. In theory this is a fantastic thing, but the problem is that every single human is biased. These biases may be harmless in many applications: a system like our genre classifier could theoretically be impacted by biases in the human-labeled training data, but there's really no negative externality to misclassifying some music. Biases coming from a human, though, could have drastic ramifications in areas like the criminal justice system. In many ways this is a very scary issue: if one of the people responsible for interfacing with some important societal system has biases, they could easily, intentionally or unintentionally, project those biases into the system with catastrophic results. A human-in-the-loop design can only be as unbiased as its most biased person, and unless we're getting babies to run our algorithms, there's really no way to remove bias from these designs. (To clarify, designs without a human in the loop also suffer from this; human-in-the-loop designs are just especially susceptible because a nefarious party could very directly inject bias into the system.)
The second answer, more minor and really only half an answer, is about transparency. In general, increased transparency for our algorithms is something I think is good, but to facilitate a more interesting reflection, I'm going to focus on the bad parts of transparency. Human-in-the-loop AI designs allow for far more transparency than the alternative, because not only can we see into the process itself, we have direct control over it. The main pitfall here again comes in the form of nefarious actors. Any system must be designed with the consideration that there are bad people who will use anything they can to their advantage. When a system is transparent, it allows bad people to analyze how it works and take advantage of it. As AI is applied to more and more of our day-to-day lives, it could be extremely damaging to have powerful people who know exactly how the system works, and exactly how to exploit it.
TLDR: Bad people exist everywhere, and they will take advantage of any amount of human-in-the-loop interactivity with AI to meet their ends.
1. High-level video game esports AI to use as practice: being able to train an opponent and then practice against it, or potentially use it as a teammate in casual play. (Smash Ultimate kind of has this and it's very cool!)
2. Route optimization at Stanford, where you can specify certain places you like and dislike walking through each time you walk a route.
3. Planning out assignment work.
4. Going from an aesthetic design to a functional blueprint (for theater sets).
5. Planning my weekly Destiny 2 grind.
6. Picking what dining hall I want to go to.
7. Working through a list of things I need to remember to buy: it could find the items that match my description, and then I can sign off on actually purchasing them.
8. A system that finds the ideal time of day to remind me to do things.
9. An automatic meeting scheduler.
10. Meme search engine that can find me the meme I’m thinking of based on conversational context and some awful provided description.