Reading Response #9
to "Humans in the Loop" and "Experimental Creative Writing"

Sam L.
11.21.21
Music 256A / CS476a, Stanford University


Reading Response: You Can't Always Get What You Want and You Probably Shouldn't Anyways

The idea from this week's reading that I will be responding to is the central question of the Humans in the Loop article:

    How Do We Want to Live with AI?

As an AI student and practitioner, I was first drawn to the field by the question of "what can AI do?" and the seemingly unending possibilities it evoked. That was the headspace I found myself in for years, since there always seemed to be more for me to master and apply to the problems I found interesting. Now that I have finally been able to engage in a dedicated study of AI as an MSCS student and explore the far reaches of the field, I have realized that the question above is far more consequential. I actually first read the Humans in the Loop article earlier this year, when I first talked with Ge about my research interests, so I've been thinking about these ideas for a while. I certainly don't think that I've gotten any closer to the answer, but at least I feel like I'm asking the right question.

The most recent answer I've found to the question of "how do we want to live with AI," or its close cousin "what do we REALLY want from AI," is based on the discussion of eudaimonics from the final chapter of Artful Design, but it is still littered with problems. The thing about people that has to be kept in mind is that we don't all want the same thing - there is no one-size-fits-all. So when answering the question of what we want from AI at a societal level, we can't take a prescriptive approach. The most flexible definition I've found so far is that the best AI is one that can automate everything we don't find eudaimonic at a personal level. The main problem with this definition is that it puts AI in a position of servitude, and if we truly find ourselves capable of creating general intelligence, that is not what a benevolent creator would do - unless there is some way to design AI whose desires are our own desires: we want an AI to serve us, and that is what it truly wants too.

All this philosophizing about the future of AI makes me think of Nick Bostrom and his book Superintelligence. One of the things he mentions early on is that humans have a habit of inventing things and then learning how to live with them - and that's very much a tenet embraced in the pages of Artful Design. Unfortunately, we don't have this luxury with AI - the threats of technological misuse are too great to leave to chance, and many of them might be too late to solve by the time true intelligence emerges. Stewarding the future of artificial intelligence is more important now than ever, at a moment when it feels like real breakthroughs may be within reach - or we might just be heading for another AI winter, who knows.