For the final reading response on Artful Design, I want to respond to Principle 8.5 -- "technology is neither good nor bad; nor is it neutral" -- outlined on pg. 406. The principle discusses how technology cannot be regarded through a neutral lens because it is crafted and used by humans, and thus "its design is a product of choices". The design choices made, as well as the outcomes that stem from those choices, cannot simply be left to chance. This made me think of the rapid onset of artificial intelligence and machine learning technology, especially chatbots and human recognition/classification systems. For example, a popular South Korean chatbot called Lee Luda was suspended from Facebook Messenger in early 2021, within three weeks of its launch. It was meant to take on the persona of a young female university student, and it was trained on billions of conversations from a popular messaging app, KakaoTalk. Luda impressed users with its responses and smooth handling of social media slang, but it quickly came under fire for exhibiting discriminatory, sexually explicit, and racist behavior after manipulative users coaxed the bot into harmful conversations.

Artificial intelligence should be kept on a tighter leash because machines learn extremely well (sometimes too well). I think the explosive growth of machine learning development has been too rapid for proper ethical and regulatory scaffolding to be put in place, resulting at times in a moral sinkhole. Algorithms that engage regularly with humans and continue to learn must be regulated constantly -- they are not something that can be engineered and then let loose on the public with little caution. The responsibility tied to a designed product is continuous; it prevails and may even increase with time. This ties into Principle 8.11 -- "design is the embodied conscience of technology" (pg. 417) -- because we as humans bestow technology with a conscience, so we must attend not only to the initial design process but also to how that conscience evolves through exposure and interaction with users. Much of technology thus holds a reflection of society as it exists today, including its flaws and imperfections. As society continues to grow and evolve, we must make sure the technology in place grows and evolves as well, but in a way that is not harmful. This leads to difficult questions: How is the data for such algorithms being collected? Who is conducting the data collection -- how transparent are they about their process, and how much care do they put into the process itself? How are algorithms being moderated? And again, who is doing the moderating?