Reading Response #7

From Chapter 7 of Artful Design, I would like to respond to Principle 7.10, which states: PUT HUMANS IN THE LOOP! This is not my first time encountering this concept. In fact, the idea of the human in the loop is currently heavily emphasized in the human-computer interaction field, especially in applications involving artificial intelligence. It is essential to involve humans in the design process in order to harness the power of the human (Principle 7.9) and, more importantly, to understand humans better.

Recently in my HAI (HCI plus AI) class, the professor introduced an online game called Quick, Draw!, which, like the ESP game, belongs to the category of games with a purpose (GWAPs). Developed by Google, the game has the player draw for a machine instead of another human. The machine uses a neural network to guess what the player means, and the drawing is fed back into the network to improve future guesses. In this way, people have created the world's largest doodling data set simply by playing, and our drawings may someday be used to identify others' doodles. The technology forms a social cycle in which people do not, and need not, know who else has contributed to the recognition.

Apart from recognition, plenty of user interfaces require humans in the loop as well. To illustrate, voice user interfaces (VUIs) have been applied to most customer services, but the user experience remains unsatisfying because VUI designers work from their own user model, which differs widely from actual users' mental models. VUIs tend to give machinelike options and responses from which we can infer the flowchart behind them, yet no one speaks that way in real life. Another point is that humans communicate according to the Cooperative Principle, which is to say that we make as many inferences as possible to advance the conversation. This principle is hard for machines to learn, but if we used a GWAP-like approach to engage people in identifying these inferences, we would probably enjoy more conversational VUIs in return.

Given the rapid development of artificial intelligence, my HAI professor asked about the future of AI throughout the course. Minerva, a large language model, is already able to solve quantitative reasoning problems with nicely formatted LaTeX. It genuinely impressed me and made me believe that AI is capable of dominating the rational side of work (Principle 7.11A). However, what such models can never replicate is our experience, memory, and reflection (Principle 7.11B), so we should really spend more time focusing on these emotional interactions.