Reading Response 9

I found this week’s reading on human-AI interaction to be especially timely, considering OpenAI released a version of GPT optimized for chat this past week, and people have used it for tasks ranging from dating advice to running a Linux terminal with internet access. I spent some time playing with ChatGPT and found that the chat context made my experience significantly different from my usage of other foundation models in several ways.

First, ChatGPT remembers the context of the conversation, so natural-language requests like “can you make that more verbose” appropriately return the previously generated response with more complexity. This also meant I found myself using more pleasantries and abiding by conversational norms: I would ask the system to “please” do something, or preface requests with “can you…”, because of the resemblance to talking to a person. At one point, when I tried using ChatGPT to debug some code and it didn’t work, I even asked “are you sure this works?” (It did work. The error was on my end.)

Second, ChatGPT is good at taking feedback and learning from it. One person published a chat transcript [https://gist.github.com/geoffreylitt/6094cdb1efdb990af3da717d7d634065] in which they use the system to solve an Advent of Code puzzle: the answer is initially wrong, but after being given execution logs, the system correctly identifies its earlier error and produces a working solution. This is very similar to the interactive source separation tool mentioned in the post, where human feedback is incorporated to produce better outcomes. There’s an almost-but-not-quite symbiotic relationship between the user and the machine here, in which the human assists the system in producing the human’s desired outcome.
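The context-carrying behavior described above can be sketched in a few lines. This is purely my own illustration (not OpenAI’s actual API or implementation): the key idea is that each new request is sent along with the full history of prior turns, so a follow-up like “can you make that more verbose” can refer back to earlier output. The `toy_model` stand-in is hypothetical and just reports how much context it received.

```python
class ChatSession:
    """Keeps the running conversation so each turn sees all prior turns."""

    def __init__(self):
        self.history = []  # list of (role, text) tuples

    def send(self, user_text, respond):
        # Record the user's turn, then generate a reply conditioned on
        # the *entire* history, and record the reply too, so later
        # requests like "make that more verbose" have a referent.
        self.history.append(("user", user_text))
        reply = respond(self.history)
        self.history.append(("assistant", reply))
        return reply


# A stand-in "model" that just reports how many turns of context it saw.
def toy_model(history):
    return f"reply after {len(history)} turn(s) of context"


session = ChatSession()
first = session.send("please debug this code", toy_model)
second = session.send("can you make that more verbose?", toy_model)
print(first)   # reply after 1 turn(s) of context
print(second)  # reply after 3 turn(s) of context
```

The second reply sees three turns (user, assistant, user), which is exactly what distinguishes this chat-style interaction from one-shot prompting of other foundation models, where every request starts from a blank slate.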
Echoing the post’s comments on how this kind of feedback loop enables more powerful systems, ChatGPT is taking social media by storm thanks to its ability to iteratively revise its outputs in response to user feedback. Finally, the following paragraph was written by GPT-3 to summarize my thoughts:

“Overall, ChatGPT’s ability to remember context and respond to feedback has made it more engaging to use than other GPT models. I found myself wanting to continue conversing with it, even when I didn’t have a specific task in mind. This is a marked difference from my experiences with other GPT models, where I would use them for a specific task and then move on. ChatGPT’s ability to maintain a conversation makes it feel more like a real interlocutor, even though it is just a machine.”

And you know what? It’s right about that. I’ve now asked it to “speak to me like a loving parent” and honestly… it’s not too bad.