Difference between revisions of "220a-spring-2021/hw3"

= Homework #3: Composing with Perception =
 
  
In this homework, you are to experiment (and have fun) with various auditory perceptual phenomena, understand how to induce their effects in listeners, and explore their use in a creative context. Parts 1-3 of the homework examine, respectively, 1) auditory stream segregation, 2) Shepard tones, and 3) the art of building/shaping expectation through sound. Part 4 asks you to create a musical statement that makes musical use of two or more perceptual phenomena (either from this homework or as discussed in class).
  
=== Due Date ===
* milestone: 2021.5.5 (in-class) Wednesday
* final deliverables due: 2021.5.11, 11:59pm Tuesday
* in-class listening: 2021.5.12, Wednesday
  
[[File:Akioshi-Kitaoka-24.jpg|360px|a hand-drawn sketch of an unsettling scene of music synthesis]]
 
  
=== Part 1: Auditory Streaming ===
Auditory stream segregation (and fusion) pertains to how a listener perceives a sequence or a mixture of sounds, grouping them into meaningful subsets (e.g., “melody”, “voice”, etc.) that can be called “streams”. [http://webpages.mcgill.ca/staff/Group2/abregm1/web/snd/Track01.mp3 Here's Al Bregman's] 1971 example of a cycle of 6 tones.
 
For instance, depending on how a [https://en.wikipedia.org/wiki/Monophony monophonic] sequence of notes is presented (with attention to pitch, tempo, timbre, and loudness), it is possible to suggest different groupings over time. In class we listened to J. S. Bach’s [https://en.wikipedia.org/wiki/Partita_for_Violin_No._3_(Bach) Partita No. 3] in E Major (Prelude), where a single monophonic violin voice alternates between increasingly large pitch intervals. When played rapidly enough, the overall effect can begin to sound like two (or even three) separate streams, even though the violin never articulates more than one note at any given time. This is an example of auditory streaming induced by control of pitch (taking into account our tendency to group high notes together), tempo (we begin to perceive more than one stream when the passage is played sufficiently fast), and even timbre and loudness (as a consequence of large pitch intervals, the higher notes can sound brighter and louder). Other examples of auditory streaming induced by control of pitch can be found in a variety of musical genres and contexts. For example, listen to [https://www.youtube.com/watch?v=6p79aSQO4oM this Charles Mingus bass solo], which begins with a concerted effort to create high and low pitch regions (or “streams”) by jumping between strings/registers quite often. Jazz bassists and other experienced improvisers often employ this idea of pitch-based stream separation while crafting musical lines.
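
To make this concrete, here is a minimal ChucK sketch (not part of any assignment starter code; the notes, tempo, and instrument are arbitrary choices made for illustration) that interleaves a low and a high register within one monophonic line. At a slow note duration it tends to be heard as a single zig-zagging melody; at a fast one, the high and low notes tend to segregate into two streams:

<pre>
// pitch-based streaming sketch (illustrative only; all values are arbitrary):
// a single monophonic line that alternates low and high registers
TriOsc osc => ADSR env => dac;
env.set( 5::ms, 60::ms, 0.0, 5::ms );
0.3 => osc.gain;

// low and high "streams" interleaved into one sequence (MIDI note numbers)
[ 48, 72, 50, 74, 52, 76, 50, 74 ] @=> int notes[];

// try 400::ms (heard as one stream) vs. 90::ms (tends to split into two)
90::ms => dur noteDur;

while( true )
{
    for( 0 => int i; i < notes.size(); i++ )
    {
        Std.mtof( notes[i] ) => osc.freq;
        env.keyOn();
        noteDur => now;
        env.keyOff();
    }
}
</pre>
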
While the previous examples primarily employed pitch variation, it is also possible to induce auditory stream segregation by intentionally varying timbre and tempo. In this ChucK example ([https://ccrma.stanford.edu/courses/220a/ck/stream-timbre.ck stream-timbre.ck]), a simple 4-note ascending sequence is played increasingly faster and against a cycle of 3 different timbres. The end result includes a stream that sounds like a slower descending sequence.
 
Now, it’s our turn to play with this:
* Begin by creating a monophonic repeating sequence of notes. (Here is some starter code, which simply plays a melodic sequence; please feel free to build on this or experiment with different sequences.) You don’t turn in anything for this step.
* (1a-tempo.ck) Vary smoothly between two tempi: one slower, in which you perceive the sequence as a single stream, and the other faster, so that it is perceivable as two (or more) streams; a minimal tempo-ramp sketch appears after this list. Feel free to try this with different note sequences (e.g., make them alternate between larger/smaller intervals, or try alternating between 2-, 3-, and 4-note groups) and observe the extent to which this strengthens or weakens the streaming “effect”.
* (1b-velocity-timbre.ck) Next, create a new note sequence. This time, play with giving special attention to note loudness (“velocity”) and timbre (e.g., using different filter cutoffs, envelope settings, oscillators, STK instruments, or other UGens) to strengthen the perception of more than one stream.
* (1c-etude.ck) Create a miniature musical statement (15-30 seconds) using this idea. Turn in both the ChucK file(s) and the wave file for your mini musical statement. Try to do as much of this in code as possible; you may assemble the final statement either fully in ChucK or by using a DAW (Audacity, Reaper, Logic, etc.) to arrange materials created in ChucK.
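
As referenced in 1a above, here is one possible shape for the tempo ramp, sketched under arbitrary assumptions (the pitches, durations, and ramp length are placeholders, not a required solution): the note duration is interpolated smoothly between a slow value and a fast value and back, so the same sequence drifts between sounding like one stream and sounding like two.

<pre>
// tempo-ramp sketch in the spirit of 1a-tempo.ck (values are arbitrary):
// glide from a slow tempo (one stream) to a fast tempo (two streams) and back
SawOsc osc => ADSR env => LPF filt => dac;
env.set( 3::ms, 60::ms, 0.0, 3::ms );
2500 => filt.freq;
0.25 => osc.gain;

[ 52, 79, 55, 81, 52, 79, 55, 83 ] @=> int notes[];

350::ms => dur slowDur;   // clearly one stream
80::ms => dur fastDur;    // tends to segregate
128 => int rampSteps;     // notes per ramp direction

0 => int n;
while( true )
{
    for( 0 => int step; step < 2 * rampSteps; step++ )
    {
        // triangle-shaped ramp: slow -> fast -> slow
        step => int pos;
        if( pos >= rampSteps ) 2 * rampSteps - pos => pos;
        pos * 1.0 / rampSteps => float t;
        slowDur + t * (fastDur - slowDur) => dur noteDur;

        Std.mtof( notes[n % notes.size()] ) => osc.freq;
        env.keyOn();
        noteDur => now;
        env.keyOff();
        n++;
    }
}
</pre>
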
=== Part 2: Shepard Tones ===
A [https://en.wikipedia.org/wiki/Shepard_tone Shepard Tone] is constructed by adding sine waves at octave frequency intervals, with amplitudes weighted under a bell-shaped curve.  By moving all the sine wave frequencies up or down (while still respecting the amplitude curve), it is possible to create the auditory illusion of an endlessly rising or falling scale / tone.
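
As a rough, from-scratch illustration of this construction (independent of the course’s shepard.ck linked below; the number of partials, base frequency, rate, and bell curve are arbitrary choices): run several octave-spaced sine waves, continually nudge each one’s position in log-frequency, wrap it around when it leaves the range, and weight its gain with a bell-shaped (here raised-cosine) curve over that position.

<pre>
// minimal Shepard-tone sketch (independent of shepard.ck; parameters are arbitrary)
8 => int N;               // number of octave-spaced partials
20.0 => float baseFreq;   // bottom of the N-octave range

SinOsc s[N];
float pos[N];             // each partial's position in octaves above baseFreq, 0..N
Gain mix => dac;
0.1 => mix.gain;

for( 0 => int i; i < N; i++ )
{
    s[i] => mix;
    i => pos[i];
}

// bell-shaped amplitude weight: raised cosine over normalized position 0..1
fun float bell( float x )
{
    return 0.5 - 0.5 * Math.cos( 2.0 * Math.PI * x );
}

-0.002 => float step;     // octaves moved per tick; negative = falling
5::ms => dur tick;

while( true )
{
    for( 0 => int i; i < N; i++ )
    {
        pos[i] + step => pos[i];
        if( pos[i] < 0 ) pos[i] + N => pos[i];    // wrap around the range
        if( pos[i] >= N ) pos[i] - N => pos[i];
        baseFreq * Math.pow( 2.0, pos[i] ) => s[i].freq;
        bell( pos[i] / N ) => s[i].gain;
    }
    tick => now;
}
</pre>

Because the components are octave-spaced and each one fades out as it approaches the edge of the range, the overall descent has no audible beginning or end.
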
There are numerous examples of Shepard tones used in film and music, especially to create moments of tension, suspense, etc. [https://www.youtube.com/watch?v=LVWTQcZbLgY This video] briefly describes Shepard tones and highlights some of these examples.
* Start with [http://chuck.stanford.edu/doc/examples/deep/shepard.ck shepard.ck], a basic Shepard Tone program in ChucK.
** You can download the above file or, in miniAudicle, go to File > Open Example > deep > shepard.ck
* Run and listen to shepard.ck
* Play with the Shepard Tone in the following ways:
** (2a-direction.ck) Change the direction from a falling tone to a rising tone (hint: it’s one number in this implementation)
** (2b-speed.ck) Change the speed (hint: it’s the same number)
** (2c-stack.ck) Create a chord by stacking three Shepard Tones together; this is less trivial and will require some [https://en.wikipedia.org/wiki/Code_refactoring refactoring of this code] (one possible structure is sketched after this list).
** (2d-stop-and-go.ck) Make it so that the Shepard Tone chord can “stop and go” on demand (i.e., momentarily pause the change of frequencies)
* Create a miniature musical statement (15-30 seconds) that incorporates Shepard Tones in a substantial way; consider using Shepard tones, Shepard chords, amplitude-enveloped Shepard tones or chords, or the stop-and-go control of Shepard chords; feel free to mix Shepard Tones with non-Shepard Tone elements (like percussion or synths). Try to do as much of this in code as possible; you may assemble the final statement either fully in ChucK or by using a DAW to arrange materials created in ChucK.
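
As referenced in 2c above, here is one possible structural sketch for stacking and pausing Shepard tones. It is not derived from shepard.ck and does not match its internals; the shepardVoice function, the paused flag, and the offsets/rates are invented for illustration. The idea is to wrap one Shepard voice in a function parameterized by a pitch offset, spork three copies to form a chord, and share a flag that freezes the frequency motion for the 2d “stop and go” behavior.

<pre>
// structural sketch for 2c/2d (not derived from shepard.ck; parameters invented)
0 => int paused;   // shared flag: 1 = freeze frequency motion ("stop and go")

fun void shepardVoice( float offsetSemitones, float rate )
{
    8 => int N;
    20.0 * Math.pow( 2.0, offsetSemitones / 12.0 ) => float baseFreq;

    SinOsc s[N];
    float pos[N];
    Gain mix => dac;
    0.05 => mix.gain;
    for( 0 => int i; i < N; i++ ) { s[i] => mix; i => pos[i]; }

    while( true )
    {
        if( !paused )
        {
            for( 0 => int i; i < N; i++ )
            {
                pos[i] + rate => pos[i];
                if( pos[i] < 0 ) pos[i] + N => pos[i];
                if( pos[i] >= N ) pos[i] - N => pos[i];
                baseFreq * Math.pow( 2.0, pos[i] ) => s[i].freq;
                ( 0.5 - 0.5 * Math.cos( 2.0 * Math.PI * pos[i] / N ) ) => s[i].gain;
            }
        }
        5::ms => now;
    }
}

// stack three falling Shepard voices into a chord: root, major third, fifth
spork ~ shepardVoice( 0.0, -0.002 );
spork ~ shepardVoice( 4.0, -0.002 );
spork ~ shepardVoice( 7.0, -0.002 );

// demo of "stop and go": alternate 4 seconds of motion and 2 seconds of freeze
while( true )
{
    0 => paused; 4::second => now;
    1 => paused; 2::second => now;
}
</pre>
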
=== Part 3: The Art of Building Expectation: Oh The Drops ===
At some level, one might [https://mitpress.mit.edu/books/sweet-anticipation say] that music is all about building expectations, and then resolving and/or artfully subverting them. This shaping of expectations, of course, can be achieved in many different ways. One approach is through the use of contrast: by paying attention to, and playing with, both what is present and what is not. Example: if a section of music consists of only mid-range and higher frequencies, the introduction of low frequencies can feel surprising yet intentional (even “inevitable”), satisfying, and purposive (i.e., a sense that something belongs, or that it serves a function even when it doesn’t in practice, which, if one thinks about it, could be said of music. BTW, for those wanting to think really hard about this idea, check out [https://plato.stanford.edu/entries/kant-aesthetics/#3.1 Immanuel Kant’s notion of Purposiveness]). This idea of playing with contrast to shape expectation can involve setting up variations in frequency, time, dynamics, harmony, timbre, texture, or anything else you can control (which, with computer music, is everything). In music, and especially in EDM, there is the concept of [https://en.wikipedia.org/wiki/Drop_(music) The Drop], which uses a few of these variations to build a [https://www.youtube.com/watch?v=XCawU6BE8P8 sometimes over-the-top] sense of anticipation and tension, leading up to an emphatic resolution. In this part of the assignment, you are to create a few different kinds of musical “drops”.
  
* Gather two or three of your favorite “drops” (in EDM or any other form of music), listen to them critically, reflect, and comment on what makes them effective as a “drop”. Make use of English (and not ChucK) for this part.
 
* (3a-freq.ck) Engineer a frequency-based “drop” moment by setting expectations with frequencies during a “setup” phase, leading into a “resolution”. Example: there is nothing, by design, in the low register for a long time, followed by an introduction of low-frequency elements (or vice versa). A few questions: How will you set up “the drop”? Will you have a rising/falling train of frequencies? How will you introduce “the drop”? Suddenly? Gracefully? Gradually? Sneakily? How might different approaches result in different perceived musical outcomes?
 
* (3b-timbre.ck) Similarly to the above, engineer a timbre-based “drop”. We will leave this mostly to your imagination and design: attempt to create a moment where the “drop” comes in the form of a timbral shift (e.g., the sound gets brighter or more muffled, or a new timbre is introduced while other elements stay the same). How will you prepare for and build to this “drop” so that it sounds intentional? For example, a low-pass filter is applied in [https://www.youtube.com/watch?v=qzU9OrZlKb8&list=RDqzU9OrZlKb8&t=185s Britney Spears’s “Till the World Ends”, starting around 3 minutes 05 seconds], curtailing the high frequencies; the sound gradually “brightens up” until the drop, where you are hit with the full frequency spectrum (as if a window has been flung open to let in the sunlight). A minimal build-and-drop sketch in this spirit appears after this list.
 
* (3c-time.ck) Next, create a time-based “drop”. Feel free to explore rhythmic build-ups, dramatic pauses, [https://people.carleton.edu/~jlondon “metric fake outs”], or other “drops”. By the way, in the same Britney Spears “drop” above, note the brief pause in the non-vocal elements right before the drop, creating a sense of suspended time before everything comes back to life.
 
* (3d-the-drop.ck) Alrighty, now it’s time to make your own “drop” in the form of a miniature musical statement (20-30 seconds).  It is to consist of a setup and a resolution, taking into account and possibly combining some of what you’ve worked with so far.  For example, what if you made an EDM-esque drop with a Shepard Tone buildup (which can literally go forever)?  Remember, your drop can be as over-the-top or as subtle as you’d like, but try to be intentional in your choices.  Try to do as much of this in code as possible; you may assemble the final statement either fully in ChucK or by using a DAW to arrange materials created in ChucK. Turn in both your ChucK code and a recording of your drop.
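
As referenced in 3b above, here is one hedged sketch of a build-and-drop shape combining a timbral and a temporal setup: a pulsing, heavily low-passed riff whose cutoff gradually opens, a brief suspended pause, and then a wide-open hit with a low sine “sub” underneath. All pitches, timings, and levels are arbitrary placeholders.

<pre>
// build-and-drop sketch in the spirit of 3b/3c (all values are arbitrary choices)
SawOsc riser => LPF filt => ADSR env => dac;
env.set( 5::ms, 80::ms, 0.3, 40::ms );
0.3 => riser.gain;
2.0 => filt.Q;
Std.mtof( 45 ) => riser.freq;   // a static low A: the "setup"

300 => float cutoff;            // start muffled
32 => int pulses;               // length of the buildup

// buildup: repeated pulses; the filter brightens and the pulses accelerate
for( 0 => int i; i < pulses; i++ )
{
    cutoff => filt.freq;
    env.keyOn();
    200::ms - i * 4::ms => dur gap;
    gap => now;
    env.keyOff();
    20::ms => now;
    cutoff * 1.12 => cutoff;    // exponential brightening
}

// suspended moment right before the drop (cf. the pause in the Spears example)
600::ms => now;

// the drop: wide-open spectrum plus a low sine "sub" hit
12000 => filt.freq;
SinOsc sub => ADSR subEnv => dac;
subEnv.set( 2::ms, 400::ms, 0.5, 800::ms );
Std.mtof( 33 ) => sub.freq;
0.6 => sub.gain;

env.keyOn();
subEnv.keyOn();
2::second => now;
env.keyOff();
subEnv.keyOff();
1::second => now;
</pre>
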
 
  
=== Part 4: Musical Statement ===
Create a musical statement (1-2 minutes) that makes use of at least two perceptual phenomena from Parts 1-3 and from class. In making use of the various phenomena in a creative context, Artful Design Principle 4.8 recommends “Experiment to ‘illogical extremes’ (and pull back according to taste)”: in audiovisual programming, freely experiment with the possibilities to find out what is “too much”, and then dial things back to your taste. Try to do as much of this in code as possible; you may assemble the final statement either fully in ChucK or by using a DAW to arrange sections created in ChucK.
  
=== Milestone ===
 
* For this milestone, we are primarily interested in a work-in-progress version of any of the musical statements—and that will be all you are expected to have on your website at this point.  However, feel free to include anything on your webpage that's helpful to talk about your explorations and thinking for this milestone.
 
* Please be prepared to share your work-in-progress and offer feedback to others in class on Wednesday (5/5)
 
  
=== Final Homework Deliverables ===
'''turn in all files by putting them in your 220a CCRMA webpage and submit ONLY your webpage URL to Canvas (in the form of https://ccrma.stanford.edu/~YOURID/220a/hw3)'''
  
Your hw3 webpage should include:
* 1) ChucK (.ck) files, as applicable, for Parts 1 through 4
* 2) sound (.wav) files, as applicable, for Parts 1 through 4
* 3) comments and reflections as you work through the homework
* 4) notes/title for your musical statement (Part 4)
 
* 5) submit ONLY your webpage URL to Canvas
 

Revision as of 13:59, 12 May 2021

= Homework #3: Composing with Perception =

In this open-ended final project, you are to devise your own project. The only requirements are that you

* “do something interesting with computer-generated sound”
* incorporate audio programming in some way
* produce a polished media artifact (audio or audio-video) showcasing or documenting your project

This can take one of many forms, including:

* a musical statement, possibly incorporating topics from Music 220A
* a (computer) music video!
* a software system to support musical composition, performance, and/or other creative use
* a structured and documented exploration of a particular topic from 220A, possibly as a teaching guide or resource for a particular audience (e.g., a webpage illustrating a particular synthesis technique, or a suite of code examples and visualizations illustrating a particular perceptual phenomenon)
* or ? (talk to us!)


=== Due Date ===

* milestone 1: 2021.5.17 (in-class) Monday
* milestone 2: 2021.5.24 (in-class) Monday
* final deliverables due: 2021.6.2 (11:59pm) Wednesday
* listening party: (time and format TBA)


=== Milestones ===

* for Milestone 1 (5/17), please come to class with three ideas (ideally as different from one another as possible) and be prepared to talk about them; choose one of the three ideas and do some preliminary work (code, score, design, etc.)
* for Milestone 2 (5/24), please demonstrate a functioning work-in-progress. Include anything on your webpage that's helpful for talking about your explorations and thinking for this milestone.
* as always, please be prepared to offer constructive feedback to others


=== Deliverables ===

'''turn in all files by putting them in your 220a CCRMA webpage and submit ONLY your webpage URL to Canvas (in the form of https://ccrma.stanford.edu/~YOURID/220a/final)'''

Your "final" webpage should include:

* 1) all ChucK (.ck) files and sound (.wav) files, as applicable
* 2) polished "portfolio grade" documentation, either as a video or an audio recording
* 3) notes/title/user manual, as applicable, for the final product
* 4) comments and reflections as you work through the project
* 5) submit ONLY your webpage URL to Canvas