Reading Response Chapter 8

Laura Schütz

Chapter 8 - Manifesto

This chapter has been the most intriguing read of the book, as it poses the core questions that we as technology makers and shapers should consider in our everyday actions. Chapter 8 introduces Frankenstein as an analogy for the complex relation between creators and the things they create. It illustrates how science and morality, and how needs and values, must be thought of simultaneously when designing technology.

Principle 8.11 – “Design is the embodied conscience of technology (for technology has no conscience of its own)” is worth discussing due to its relevance for the design of intelligent systems and AI. Robots and self-driving cars are presented in chapter 8 as examples of systems that have decision-making power. In those cases, a human, the person who creates the product and writes the code, defines the humanity, the morality, and the values of the technology. In introductory Product Design classes, one of the first principles taught is that a designer should not design for themselves, but for their users. As designers, we are not to assume that our own needs and values are the same as our intended users’. That said, it is comparatively easy to design for a narrow target user group (e.g. people suffering from insomnia). With universally used, intelligent systems the question arises: who do we design for? There is no specific user group with distinct needs and values; rather, every human on this planet may interact with a technology such as a voice assistant or an autonomous car. To design for all of humanity and to account for different values, design teams have become more racially and culturally diverse. However, even a diverse group of designers can only represent so many values at once. So how do we design technology and decide upon its values and conscience if there is not even a common set of values among humans? Do we even have a shared understanding of morality, and do we believe in acting out of intrinsic value as Kant’s categorical imperative suggests? Navigating the ambiguity of what is good and what is bad, and which technology deserves a conscience and which does not, is part of a designer’s job. Some decisions impose a god-like responsibility on humanity that I am not sure we are made for.

All these ethical questions around designing technology are awfully hard to navigate. Chapter 8 begins with the history of how we develop new technologies and only find out about their implications and the moral dilemmas they cause after we expose them to the world. But if we are to decide on the “right” morality for a technology before we release it to the world, how do we decide which are the right values to “imbue” (as Ge likes to say)?