Chapter 3: Visual Design


From this week’s reading I am responding to Principle 3.7: Prompt Users to Experience Substance (Not Technology), whereby “good design should use the medium to highlight a narrative, while hiding the medium itself”.


Having this included in a design process is interesting to me because I find it beautiful when users can experience substance through a hidden medium. I feel like there is a huge gap between the technology available for creating art and the content artists actually put out there that is digestible for the wider population. That’s not bad; it certainly has its role in exhibiting the potential and possibilities of technology in art. But what more can we do with it to make better design?


Apologies that this isn’t really a sound/graphics/interaction example, but I will take data sonification as an example, specifically climate data. In its simplest form, we take thousands of years of CO2 data, map the values to frequencies, play them with a sine wave, and compress the record into a span of time short enough for humans to perceive. What is the impact for the wider audience? We are listening to data that is very nonmusical in the conventional sense. Is it intended for the wider audience? Could it be? I think it could, but it comes down to intention: what if there were a way to get people to emotionally connect with climate change through music the way they might through a painting or a film? And why would we want to do that? Because if we want change, or to start discourse among wider audiences, we could appeal to pathos.

What if we made the musical form familiar and had the climate parameters control something else? Normal melodies, harmonies, and rhythms, but with CO2 mapped to an automated distortion. Maybe the performance takes place in an immersive planetarium. Maybe sea level change or ocean pH is mapped to some secondary layer of the visuals. The work highlights a narrative in a still-engaging way; the medium isn’t completely hidden, but it elicits perhaps a physical reaction to the distortion. It still reaches the audience through a combination of familiar form and chaos.
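The “simplest form” above can be sketched in a few lines. This is a minimal illustration, not a real sonification pipeline: the CO2 values, the frequency range, and the seconds-per-data-point are all made-up assumptions.

```python
import math

# Hypothetical CO2 readings (ppm), oldest first -- placeholder data,
# standing in for thousands of years of ice-core measurements.
co2_ppm = [180, 200, 240, 260, 280, 300, 340, 380, 420]

def ppm_to_freq(ppm, lo_ppm=180, hi_ppm=420, lo_hz=220.0, hi_hz=880.0):
    """Linearly map a CO2 value onto an audible frequency range (an
    assumption; any monotonic mapping would work)."""
    t = (ppm - lo_ppm) / (hi_ppm - lo_ppm)
    return lo_hz + t * (hi_hz - lo_hz)

def render(co2, sr=44100, seconds_per_point=0.25):
    """Render the record as back-to-back sine segments, compressing
    millennia into a couple of seconds of audio."""
    samples = []
    for ppm in co2:
        f = ppm_to_freq(ppm)
        n = int(sr * seconds_per_point)
        samples.extend(math.sin(2 * math.pi * f * i / sr) for i in range(n))
    return samples

audio = render(co2_ppm)  # a bare list of samples; write to a WAV to listen
```

The distortion idea later in the paragraph would just swap the mapping target: instead of `ppm_to_freq`, the same normalized CO2 value could drive a distortion amount on an otherwise conventional melody.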


Now for a sound/graphics/interaction example: I had an idea for a laser harp installation. It started off as just a bunch of lasers that you trigger to make a sound, but then I thought of ways to make it more accessible to people who are not musicians. So what if it wasn’t really an instrument? In an ideal world where lasers are not dangerous to accidentally look at, imagine a grid of lasers shooting down from the ceiling, spanning a room, say 10 by 10 lasers, tuned to a minor pentatonic scale or something like it, so that no matter which beam is triggered it sounds “good”. Maybe one laser triggers a smoke machine and the “instrument” is revealed. And what if you “played the instrument” by moving and dancing? It is an instrument that requires no musical skill but invites movement and dance. It just looks like a normal room. But it’s an instrument! And you become a musician AND a dancer!
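The “any beam sounds good” tuning could look something like this. A hypothetical sketch: the root note, octave wrapping, and grid-to-pitch layout are all arbitrary choices, not a spec for the installation.

```python
# Minor pentatonic intervals, in semitones above the root.
PENTATONIC = [0, 3, 5, 7, 10]

def laser_to_freq(row, col, root_hz=110.0, grid=10):
    """Map a (row, col) laser in a grid x grid ceiling array to a pitch
    in A minor pentatonic, so any beam a dancer breaks is consonant
    with any other."""
    idx = row * grid + col                 # walk the grid in reading order
    q, step = divmod(idx, len(PENTATONIC))
    octave = q % 4                         # wrap so 100 lasers stay in a playable range
    return root_hz * 2 ** ((12 * octave + PENTATONIC[step]) / 12)
```

Because every beam lands on a pentatonic degree, there are no “wrong notes”: the tuning, not the performer’s skill, guarantees consonance.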

Sorry that wasn’t really a graphics example.

MILESTONE 2 VIDEO

https://www.youtube.com/watch?v=ctdA-I1kFys


So far I have created my fractal GameObjects. I’ve had quite a challenge navigating the script for the Spectrum and figuring out what it’s doing. I had fun getting the fractal to build, and I look forward to getting it to just work and then being creative with it. I started with the 3D fractal tutorial by the YouTube channel Angry Carrot, messed with some parameters, and replicated the objects. I spent a long time breaking my brain trying to understand self-reference in C#. At this point I am putting my faith in the process. I have been brainstorming ideas on what parameters to map to my fractals:

1. Divide the spectrum into 3 frequency bands: Low/Mid/High. Map Magnitude to max depth. Incorporate colors somehow.

2. If things get spicy, divide the spectrum into the 24 Bark critical bands.
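Idea 1 can be sketched roughly as follows. This is a Python stand-in for what would really live in the Unity C# spectrum script; the band edges, depth range, and rounding are assumptions to play with, not final values.

```python
# Band edges in Hz (an assumption; tweak to taste).
BANDS = {"low": (20, 250), "mid": (250, 2000), "high": (2000, 8000)}

def band_magnitudes(spectrum, sample_rate=44100):
    """Average the FFT-bin magnitudes falling inside each band.
    `spectrum` is a half-spectrum of magnitudes, covering 0..Nyquist."""
    n = len(spectrum)
    bin_hz = (sample_rate / 2) / n          # Hz covered per bin
    out = {}
    for name, (lo, hi) in BANDS.items():
        lo_bin = int(lo / bin_hz)
        hi_bin = min(int(hi / bin_hz), n)
        bins = spectrum[lo_bin:hi_bin] or [0.0]
        out[name] = sum(bins) / len(bins)
    return out

def magnitude_to_depth(mag, min_depth=1, max_depth=6):
    """Map a 0..1 band magnitude onto a fractal recursion depth."""
    mag = max(0.0, min(1.0, mag))
    return min_depth + round(mag * (max_depth - min_depth))

# Toy input: a flat half-spectrum of 512 bins, all at magnitude 0.5.
bands = band_magnitudes([0.5] * 512)
depths = {name: magnitude_to_depth(m) for name, m in bands.items()}
```

Each of the three fractals would then rebuild to its band’s depth every frame; the “spicy” version in idea 2 is the same loop over 24 Bark-band edges instead of three.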