Week 5: Chapter 5
I used the sanity test mentioned above on my project and want to share this introspective process.
First, I reflected on the first question: does the end product justify the technology?
This question is very hard to answer because I do not know what the end product will look like. At the moment, after the hours of work I’ve put in, the sequencer is very delayed, and therefore the technology is not “justified.” My sequencer involves shooting guns at instruments to make sounds, and because there is currently a delay, the sequencer is sort of useless. I am fairly confident I can fix this in the coming week, though. I am still in the process of implementing ChucK, and I plan on using a raycast instead of a physical bullet to trigger ChucK to play the WAV file, since that should cost less processing power. If this is done correctly and the delay is gone or reduced, then the end product will certainly justify the technology. Playing instruments (and having them light up) using a gun could only be done in a virtual reality world. I considered just making a computer game, but I believe the experience would be heightened in VR. In the end, a human could never manage to play multiple instruments at the same time, and loop them, the way this sequencer allows you to.
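To make the trigger path concrete, here is a rough ChucK sketch of what I have in mind. The file name and the trigger call are placeholders (in the actual project, the raycast hit would arrive from the game engine, e.g., through Chunity or OSC); the idea is that loading the sample once up front and only rewinding it on each hit avoids per-shot file loading, which may be part of my delay:

```chuck
// Minimal sketch: fire a one-shot sample on demand.
// "kick.wav" is a placeholder file name, not the project's asset.
SndBuf buf => dac;
me.dir() + "kick.wav" => buf.read;   // load the sample once, up front
buf.samples() => buf.pos;            // park the playhead at the end (silent)

// In the real project this would be called when the engine reports
// a raycast hit on an instrument.
fun void trigger()
{
    0 => buf.pos;                    // rewind: plays immediately, no reload
}

trigger();                           // simulate one raycast hit
buf.samples()::samp => now;          // keep the VM alive while it plays
```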
The second question has a similar answer to the first. I believe you could not recreate this experience outside of virtual reality; the experience is only unique because of the combination of processes. In real life, you can shoot a gun at objects and you can play instruments, but you cannot use a gun to play music. Likewise, in real life you can use a digital sequencer to make music, but you often do not get the experience of visualizing the instrument, meaning seeing an actual kick/snare/marimba (and watching it light up) rather than just a square button. Theoretically, you could run this program on a computer, but there would be limitations.
The third question is a definite yes. One issue I had when considering this project is tempo. Rather than an audible metronome, which could get in the way of the music, or a visual metronome, which may be difficult to synchronize with, I decided to use the Oculus controllers’ vibrations. Both humans and computers have a sense of rhythm, but synchronizing the two can prove difficult in real life. As a guitarist, I always had to use an auditory metronome when trying to play songs, and the constant beeping would take away from the music a bit for me. I think vibrational cues can be a much better option, and because I do not need to pluck strings or hold a pick, my hands are free to feel these vibrations. The sound-visual interplay is also an important part of the computer-human relationship. Humans naturally enjoy the aesthetics of visuals, and computers are the only way to sync visuals to sound. Humans have the ability to aim the controller and pull the trigger, but they rely on the computer to do the rest of the work and create the visual/auditory experience that will hopefully evoke an emotional response.
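On the timing side, the core of the metronome is just a loop over beats; the haptic pulse itself has to be fired by whatever engine owns the Oculus controllers. Here is a rough ChucK sketch of the tempo loop (the BPM value and the signaling mechanism are assumptions, not final project code):

```chuck
// Rough tempo-loop sketch, assuming 120 BPM.
// ChucK keeps sample-accurate time; on each beat, the host engine
// would be told to pulse the controller haptics (e.g., via OSC or a
// Chunity callback) -- that call is engine-side, so it only appears
// here as a stand-in print.
120.0 => float bpm;
(60.0 / bpm)::second => dur beat;

while (true)
{
    <<< "beat" >>>;   // stand-in for "fire controller vibration"
    beat => now;      // advance exactly one beat
}
```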
Overall, after going through this checklist, I feel a bit more confident with my project and am excited to move forward.
Design Etude: Part 1
Marimba: The marimba is a very basic, non-technological, acoustic instrument. All it requires is wooden mallets to strike wooden bars that are suspended at both ends. Bars of different lengths produce distinct pitches that can be used to create beautiful music. The time mapping is entirely up to the player, who must keep time themselves, and the pitch mapping is in half steps.
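As a rough physical aside (this is the idealized uniform-bar model, and real marimba bars are undercut to tune their overtones, so it is only approximate), the fundamental frequency of a freely vibrating bar scales as

$$ f \;\propto\; \frac{t}{L^{2}}\sqrt{\frac{E}{\rho}} $$

where $t$ is the bar’s thickness, $L$ its length, $E$ its Young’s modulus, and $\rho$ its density. This is why the pitch climbs so quickly as the bars get shorter: halving a bar’s length raises its frequency by roughly two octaves.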
Logic Pro X: Logic can take in multiple different input sources. You can create sound in Logic using pre-made audio files, add MIDI by manually drawing notes within the sequencer, add MIDI via an external instrument (such as an electric keyboard), record audio, or use the pre-made samples that ship with Logic. The output comes from your computer or speakers after being processed, and it can contain one or all of the aforementioned input sources. The time mapping can be done live against a metronome or programmed directly into the MIDI grid. The default MIDI pitch mapping is in half steps, but the program gives you access to any pitch or sound.
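To pin down what “half steps” means numerically: the standard MIDI mapping spaces note numbers one semitone apart, with frequency f = 440 · 2^((n − 69)/12). ChucK happens to have this conversion built in, which makes for a quick illustration (not project code):

```chuck
// Standard MIDI half-step (semitone) mapping:
//   f = 440 * 2^((n - 69) / 12)
// ChucK's built-in Std.mtof computes exactly this.
<<< "MIDI 60 (middle C) ->", Std.mtof(60), "Hz" >>>;         // ~261.63
<<< "MIDI 61 (one half step up) ->", Std.mtof(61), "Hz" >>>;  // ~277.18
<<< "MIDI 72 (one octave up) ->", Std.mtof(72), "Hz" >>>;     // ~523.25
```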
Here is a snippet of me playing the Powerade: