Reading Response #5

Artful Design Chapter 5: “Interface Design”


I admit that prior to reading this chapter, I was a big proponent of “the mapping is the hard part” and “the mapping is what makes the instrument.” When designing new digital instruments and interfaces, I felt that was the most challenging aspect to implement when it came to the “input, mapping, output” design paradigm. After reading Perry Cook’s perspective on this, such as his “three reasons for intimacy loss” followed by Principle 5.17: “Embody!”, I can see that I just haven’t been giving the other components the depth that they deserve.

“Lack of haptic feedback from controller to player” is such an obvious downside in hindsight but something I never gave much thought to in my own designs. Why do I prefer my basic Keychron mechanical keyboard over the Apple Magic Keyboard or my built-in Mac keyboard? They all serve the same function, output the same results, and have the same latency from pressing a key to seeing the character appear on screen. But I love my Keychron because of the clicking sound, the resistance of the keys requiring extra force to push them down, and the long key travel. The tactile experience makes it more fun to use, and the ergonomic experience “allows [me] to think less about how to control it,” letting me type faster.

The three components that allow us to feel embodiment—haptic feedback, high fidelity, and sound from the source—make a lot of sense when it comes to new physical devices, but I wonder if this is possible to achieve when the human isn’t actually in contact with any device. I was involved in an interactive, intermedia performance at CNMAT during my undergrad, and one of the main components of the performance was an Xbox Kinect that used skeletal data to manipulate a projected animation and modulate the timbral qualities of the soundscape. It’s easy to see that the Xbox Kinect here is an instrument, human gestures are the input, and the modulations of the sound and visuals are the output; but is it a very engaging instrument? Does the human performer feel like an integral part of the system? Maybe, in this case, the sound should come from the human (e.g. speakers attached to the performer’s body) instead of speakers placed around the room (or next to the Kinect), since their gestures are what actually affect the sound. But what about haptic feedback?

A lot of my work so far has revolved around software, which humans can often find cold and detached (a notion discussed in the computers-vs.-humans dichotomy presented in the SLOrk section of the chapter). Although we typically interact with software tactilely through mice and keyboards, what about voice control? Physical gestures picked up by a webcam? Will there always be a “loss of intimacy” in these cases? I don’t know the answers to these questions yet, but going forward I will always keep in mind the human component (arguably the most important part!) of the design equation.
