Earlier this quarter, I attended a “startup retreat” with a group of around 20 Stanford students. During the retreat, we worked on our individual projects and presented them at the end. Unsurprisingly, a large number of people worked on AI projects, most of them using some OpenAI API to do something they thought was cool. In their presentations, every person was super excited to show off the technical abilities of what they had built. But not a single one of them fully explained what its intended purpose was.


In one case, someone was working on a startup project that uses AI to analyze videos and text, summarize emotions through color, and point out key themes. He explained in depth how he built the system, but there was absolutely no mention of what it was for. After additional questioning from other students, we finally understood that he was building a journaling app for mental health. I was astounded that he seemed interested only in the technical aspects of the project, not in thinking critically about what the interaction with humans would look like and how it could potentially help or harm people. That was particularly upsetting given that his app deals with mental health, something that is deeply sensitive and should be taken seriously. Soon after, I heard from a mutual friend that he had already begun beta testing with Stanford students.


This, unfortunately, was not a unique experience for me at Stanford. I have seen countless Stanford startup bros begin building systems without giving any thought to the impact those systems might have. I have also seen countless Stanford students take almost every AI class available without ever thinking about how they are going to apply the skills they have learned. I plead with these students to at least begin asking questions about how we want to coexist with AI systems. One question to start with: do we really want to focus on creating AI systems that replace humans by doing the things humans already do, only better, or do we want to create AI systems that aid humans instead? For example, do we really want an AI journaling app that analyzes our every thought and potentially spits out information that causes more harm than good? Or might a better solution involve using AI to connect people to more reputable mental health resources, like human therapists?