Critical Response 2 - Humans in the Loop

By: Dominic DeMarco

Part 1: Takeaway

I found two key takeaways in these readings that go hand-in-hand: we shouldn't search for the "perfect algorithm," and we shouldn't treat humans like oracles. At first, these may seem like rather different concepts, but there is a solid connection between them. A silver-bullet, red-button-esque algorithm may sound ideal, but such an algorithm is often opaque and pre-configured, giving users limited insight into its inner workings and limited options for adapting the model to a novel situation. In a sense, these "perfect algorithms" are designed to minimize the impact of humans on the output. The algorithm still needs training data, however, and that data does come from humans. To acquire it as efficiently as possible, such an algorithm would want its human interface to be as concise as possible, perhaps asking only for classification labels or yes/no responses. In other words, a red-button algorithm would want to use human oracles. From this, we can see that AI designs that don't prioritize keeping humans in the loop often fall into this pattern (or exclude humans entirely).

We might now wonder why this established convention of humans-as-oracles is harmful, and the "Power to the People" article does a good job of outlining why. It is demeaning to the humans, feels tedious, is error-prone, and disincentivizes humans from helping machines learn. I found it fascinating to read about the biases humans exhibit when helping machines learn (such as skewing toward positive feedback and disregarding instructions in order to signpost desired future behavior), and how, when incorporated into the training process, these tendencies can be leveraged to increase the algorithm's accuracy. This suggests that by studying the way humans learn and teach, we can become better at teaching AI and machines.
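To make this concrete, here is a minimal sketch of one way a learner might account for the positive-feedback bias (my own illustration, not taken from the readings; every name here is hypothetical). It centers each raw human reward around a running average of the teacher's feedback, so that a positively biased teacher's withheld praise can still register as a negative signal.

    # Sketch only: center raw human rewards around the teacher's running
    # average so that "faint praise" from a positively biased teacher can
    # act as a negative signal.

    class BiasAwareFeedback:
        def __init__(self, decay: float = 0.9):
            self.decay = decay    # how quickly the baseline forgets old feedback
            self.baseline = 0.0   # running estimate of the teacher's typical reward

        def adjust(self, raw_reward: float) -> float:
            """Return a centered reward: positive only if above the teacher's norm."""
            centered = raw_reward - self.baseline
            # Update an exponential moving average of the teacher's feedback.
            self.baseline = self.decay * self.baseline + (1 - self.decay) * raw_reward
            return centered

    if __name__ == "__main__":
        fb = BiasAwareFeedback()
        # A teacher who skews positive: mostly +1, occasionally 0.
        for raw in [1, 1, 1, 0, 1, 1, 0, 1]:
            print(f"raw={raw:+d}  centered={fb.adjust(raw):+.2f}")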

Still, there is an underlying transaction of learning happening: people are investing their time and effort in training machines that may one day take their jobs or eclipse them at the very things they are teaching. Could the job of "teacher" one day be reserved for teaching machines, while machines teach our children (or our children's children) what we would teach them today? The idea that machine learning could reinvent traditional learning scares me a little bit and makes me worry that people will be disincentivized from helping AI reach its potential.

The human-in-the-loop tenet, if widely adopted, would make AI seem like less of an adversary and more of a helpful companion. But it would need to be implemented in a way that doesn't belittle humans or make them feel like a token component of a much bigger system that doesn't really need them. It also requires that the AI system be transparent not only to designers and technical experts but also to everyday adopters. As such, in addition to challenging the notion of human oracles and silver-bullet algorithms, we must not use human-in-the-loop ideas to create anti-humanist systems masquerading as humanist ones.

Part 2: Interactive AI Activities

1: OMR (optical music recognition), especially on handwritten manuscripts, could leverage image-recognition AI to identify most symbols while asking the human in the loop to examine challenging cases or patterns (see the sketch after this list).

2: One could imagine an AI-powered chore manager that helps you schedule mundane tasks like doing laundry and cleaning toilets; it would generate schedules for you and learn from your feedback whether a schedule works for you.

3: An AI chatbot-like therapist could learn from your experiences and a large medical corpus to provide care at a much lower cost to anyone who might need it.

4: I can imagine leveraging AI + architecture so that an architect and machine can co-create inventive new designs and buildings.

5: It would be amazing if an AI could be trained to translate visual models in your brain (imaginations, dreams) into a 3D rendering.

6: If AI could be trained to create games instead of just solving them, it could be leveraged to challenge humans with an endless stream of puzzles!

7: We could create the ultimate dystopian AI by having an AI-powered NFT trading bot that asks the human for input and does nothing with it.

8: An essay-writing or design-doc-synthesizing AI tool could eliminate the tedious writing that goes hand-in-hand with large corporations by allowing the user to provide the actual product description and iterate over the verbiage to round out the design doc.

9: One could imagine a widely accessible climate model that users can interact with to see how their homes could be affected in the future and to explore policy changes that might mitigate harms.

10: This is a terrible idea, but AI could be used to train people how to drive or do other high-risk activities - it would be able to retake control of the vehicle if the trainee gets themself into a sticky situation. This couldn't possibly go wrong.
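Item 1 above is the most concrete of these ideas, so here is a minimal sketch of the confidence-gated pipeline it describes (entirely illustrative; the names and threshold are my own assumptions, not an existing OMR library). Confident predictions are accepted automatically, and anything below the threshold is routed to the human in the loop.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class SymbolPrediction:
        label: str         # e.g. "quarter_note", "treble_clef"
        confidence: float  # model confidence in [0, 1]

    def transcribe(symbols: List[str],
                   classify: Callable[[str], SymbolPrediction],
                   ask_human: Callable[[str], str],
                   threshold: float = 0.85) -> List[str]:
        """Accept confident predictions; defer challenging cases to the human."""
        transcript = []
        for sym in symbols:
            pred = classify(sym)
            if pred.confidence >= threshold:
                transcript.append(pred.label)      # machine handles the easy case
            else:
                transcript.append(ask_human(sym))  # human resolves the hard case
        return transcript

    if __name__ == "__main__":
        # Toy demo with canned predictions standing in for a real model.
        canned = {"s1": SymbolPrediction("quarter_note", 0.97),
                  "s2": SymbolPrediction("smudged_glyph", 0.40)}
        print(transcribe(["s1", "s2"],
                         classify=lambda s: canned[s],
                         ask_human=lambda s: "half_note"))  # pretend the human answered
        # prints ['quarter_note', 'half_note']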