CCRMA Wiki — user contributions feed for Srsmith (MediaWiki 1.24.1, retrieved 2024-03-29)<br />
[https://ccrma.stanford.edu/mediawiki/index.php?title=Music_250a_Lab_Video_Wiki&diff=13824 Music 250a Lab Video Wiki] — revision of 2012-11-09T18:12:12Z by Srsmith (/* Lab 6: Graphics */)
<hr />
<div>== Music 250a - Autumn 2012 ==<br />
<br />
=== Lab 1: Making Music with Pd ===<br />
<br />
Romain Michon [[http://youtu.be/O9-llQIRhuI Simple Granular Synth]]<br />
<br />
Chris Beachy [[http://www.youtube.com/watch?v=HXyMCFmgbBU FM Synth]]<br />
<br />
Beau Silver [[http://youtu.be/AXp1_bZCHPU Keyboard Play 1]] - [[http://youtu.be/0U1S4c__XHQ Keyboard Play 2]]<br />
<br />
Afrooz Family [[https://ccrma.stanford.edu/~afrooz/250a/lab1/lab1.html Lab 1 Writeup]]<br />
<br />
Evan Gitterman [[http://www.youtube.com/watch?v=emXpfsXTJlI Experimenting with PD]]<br />
<br />
Jimmy Tobin [[http://www.youtube.com/watch?v=l9O8vbuH2mU Lab1]]<br />
<br />
=== Lab 2: Arduino & Sensors===<br />
<br />
Eoin Callery [[http://youtu.be/E5nOmCdYJBA Light and Bend Sensor]]<br />
<br />
Beau Silver [[http://youtu.be/WO5VIcRpzxk Button Drum Machine]]<br />
<br />
Jennifer Hsu [[http://youtu.be/45q9db2BM2M Feedback FM+]]<br />
<br />
Myles Borins [[https://secure.vimeo.com/51204737 FM experimentation]]<br />
<br />
Matt Alexander [[http://youtu.be/HYP8utzuWlU FM Modulated Synth]]<br />
<br />
Romain Michon [[http://youtu.be/PT161GsCnVw Clarinet Physical Model]]<br />
<br />
Tim O'Brien [[http://youtu.be/l5iXLTEbpU8 Squeezebox]]<br />
<br />
Priya Shekar [[http://youtu.be/bp3AHXZAkmI BeatTrip]]<br />
<br />
Cecilia Wu [[https://vimeo.com/51269334 FM Duet]] with the password cecilia<br />
<br />
Ivan Naranjo [[http://www.youtube.com/watch?v=9_GdbqExaJs Multiple Sines + FM]]<br />
<br />
Yoo Hsiu Yeh [[http://youtu.be/mRZGO_JQQaM Everybody Talks]]<br />
<br />
Evan Gitterman [[http://www.youtube.com/watch?v=UOcYBGYDcv0 Accelerometer LFO Synth]]<br />
<br />
Chris Beachy [[http://youtu.be/lf5ypub-RI4 Force Sensor Sine Wave]]<br />
<br />
Sarah Smith [[http://youtu.be/7miob1px2BI Force controlled ADSR]]<br />
<br />
Ilias Karim [[http://youtu.be/S8c6eYyxXno Scales]]<br />
<br />
Alice Fang [[http://youtu.be/QB3aSAz-ymo Modrum]]<br />
<br />
Helen Chavez [[http://www.youtube.com/watch?v=Xb2crIlJFfc&feature=youtu.be Weird Sounds: Phasors/Sine Waves]]<br />
<br />
Jimmy Tobin [[http://www.youtube.com/watch?v=R7AMOHZynos Under Pressure]]<br />
<br />
Lulu DeBoer [[http://www.youtube.com/watch?v=WnD0Ov-jSWs&feature=results_video The Clicker]]<br />
<br />
Erin Baumann [[http://www.youtube.com/watch?v=DFB5m7vXPqQ&feature=youtu.be video]]<br />
<br />
David Meisenholder [[http://youtu.be/y80B73SyCwE LED Band]]<br />
<br />
Afrooz Family [[https://ccrma.stanford.edu/~afrooz/250a/lab2.html Sensor drum machine]]<br />
<br />
===Lab 3: Firmware Programming===<br />
<br />
Beau Silver [[http://youtu.be/_lGrsfMvXqw Air Piano]]<br />
<br />
Eoin Callery [[http://youtu.be/H5CrsirZp5o Angry Animal]]<br />
<br />
Tim O'Brien [[http://youtu.be/7yNdcolLj_A Guitars, sampled]]<br />
<br />
Jennifer Hsu [[http://youtu.be/92Xj1c4Ue_I things]]<br />
<br />
Sarah Smith [[http://youtu.be/iJo_xAvTrhs FM Modulated FM]]<br />
<br />
Iván Naranjo [[http://www.youtube.com/watch?v=a2Zdd9qnd-I Filtered Noise + Delayed Pick up mic on Computer]]<br />
<br />
Myles Borins [[https://secure.vimeo.com/51718468 Bellaccelerometer]]<br />
<br />
Helen Chavez [[http://www.youtube.com/watch?v=rKctnuCasD4 FM Synth Play]]<br />
<br />
Romain Michon [[http://youtu.be/8_8R3j3uXvw Voice Synthesizer]]<br />
<br />
Cecilia Wu [[https://vimeo.com/51733584 Crazy knob]] password cecilia<br />
<br />
Evan Gitterman [[http://youtu.be/yvfeRTqH4ls FFT & FM Kazoo]]<br />
<br />
Yoo Hsiu Yeh [[http://youtu.be/PogunVUr12o Happy Halloween]]<br />
<br />
Jimmy Tobin [[http://www.youtube.com/watch?v=FT8pmS4R2hw Dancing Flex Sensor]]<br />
<br />
Chris Beachy [[http://www.youtube.com/watch?v=dmlfarL4bTw&feature=youtu.be Chord Tunerator]]<br />
<br />
Ilias Karim [[http://youtu.be/BaV502vB3y0 video]]<br />
<br />
Priya Shekar [[http://youtu.be/bp3AHXZAkmI BeatTrip v3]]<br />
<br />
Erin Baumann [[http://www.youtube.com/watch?v=9h1gY85rK2E&feature=youtu.be video]]<br />
<br />
Matt Alexander [[http://youtu.be/KY_aa1i0MaI Bells]]<br />
<br />
===Lab 5: Mini-Instrument ===<br />
<br />
Beau Silver [[http://youtu.be/Vkm3dQClwSU Solenoid Marimba]]<br />
<br />
Tim O'Brien [[http://youtu.be/nRecVlmUlQY The Helmet]]<br />
<br />
Jennifer Hsu [[http://youtu.be/aUCNgo-DrV4 hex]]<br />
<br />
Iván Naranjo [[http://youtu.be/ALejsFa5YeM Every Thought emits a Throw of Dice]]<br />
<br />
Yoo Hsiu Yeh [[http://youtu.be/pl_zzkVLBIs Anxious Cat Box]]<br />
<br />
Evan Gitterman [[https://www.youtube.com/watch?v=r1puTFQj2hs Headzoo]]<br />
<br />
Matt Alexander [[http://youtu.be/Nt8aH0Gm0gE Distance Sensor Keyboard]]<br />
<br />
Cecilia Wu [[https://vimeo.com/52676601 Tibetan Synth bowl]] password cecilia<br />
<br />
Romain Michon [[http://youtu.be/KKsi9Mr0jvw The Féraillophone]]<br />
<br />
Lulu DeBoer [[http://www.youtube.com/watch?v=koyzi-TvPbU&feature=plcp Scratch Board]]<br />
<br />
Eoin Callery [[http://youtu.be/vWSJ5P7MuwM The utterly inappropriately named Smoking Phallus]]<br />
<br />
Ilias Karim [[http://youtu.be/NuC5woFR3LE Master Mic]]<br />
<br />
Chris Beachy [[http://www.youtube.com/watch?v=fV166iYWe_g&feature=youtu.be Dirty Theremin]]<br />
<br />
Sarah Smith [[http://youtu.be/x9iH_hDygyE Organ-Book]]<br />
<br />
Alex Hay [[http://www.youtube.com/watch?v=NfbJWPDP27Q&feature=g-upl Thumbduino]]<br />
<br />
Priya Shekar [[http://youtu.be/hKZZFm--34o BeatTrip The Halloween Special]]<br />
<br />
David Meisenholder [[http://youtu.be/-R3vBhcqNFQ Bass Massager]]<br />
<br />
Myles Borins [[https://secure.vimeo.com/52651981 Tilty.js]] [[https://www.youtube.com/watch?v=nRUi3bQjzAo&feature=youtu.be Performed by Gina]]<br />
<br />
Afrooz Family [[https://ccrma.stanford.edu/~afrooz/250a/lab5.html Water wheel beat box]]<br />
<br />
Erin Baumann [[http://www.youtube.com/watch?v=8qOcZehoqy4&feature=youtu.be The Annoyance Hat (performed by Alice Fang)]]<br />
<br />
Jimmy Tobin [[http://www.youtube.com/watch?v=I7QLBd5fot0 The Invisible Trumpet]]<br />
<br />
===Lab 6: Graphics===<br />
<br />
Beau Silver [[http://youtu.be/gQ2xm2VzeIU Sound Blob From Inner Space]]<br />
<br />
Romain Michon [[http://youtu.be/falnld-YGv4 Virtual Tibetan Bowl]]<br />
<br />
Tim O'Brien [[http://youtu.be/pMdMSy7mPow Sputnik]]<br />
<br />
Myles Borins [[https://secure.vimeo.com/53103218 Tilty.js with Viz]]<br />
<br />
Iván Naranjo [[http://youtu.be/cL1J5xzkebE simple Interaction from input ]]<br />
<br />
Jennifer Hsu [[http://youtu.be/9d4WygZzCnY blooooop]]<br />
<br />
Priya Shekar [[http://www.youtube.com/watch?v=8g7dtHEoz84 BeatTrip v5 feat. Nyan Cat]]<br />
<br />
Evan Gitterman [[http://youtu.be/fWdNjeMuFeg Accelerometer Pong]]<br />
<br />
Yoo Hsiu Yeh [[http://youtu.be/XshhZk2162g I eat you, you eat me. Please come back.]]<br />
<br />
Sarah Smith [[http://youtu.be/mQ1jqcqeNYU Organ Book Visualization]]<br />
<br />
== Music 250a - Autumn 2011 ==<br />
<br />
=== Lab 2: Arduino & Sensors===<br />
* Lab 2 [https://ccrma.stanford.edu/wiki/Talk:250a_Microcontroller_%26_Sensors_Lab_Pd instructions]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=82P7cX4cAoE video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=Pkz5ymua5QQ video]]<br />
<br />
Kevin Ho [[http://vimeo.com/30418793 video]]<br />
<br />
Hongchan Choi [[http://www.youtube.com/watch?v=abNF1xXhNNk video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=hJYS5OI1maQ Thriller for the FrankenSine~]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=NLSJ39CWeUk Electric Tambura]]<br />
<br />
Ben Broer [[http://www.youtube.com/watch?v=Kg_aSxQ0BOE ReverBrain]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=JsjvaCs-q98 video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=h5SfKpuqQDc video]]<br />
<br />
Jeff Rowell [[http://www.youtube.com/watch?v=6jzTjZkpX2Y video]]<br />
<br />
Derek Mendez [[http://www.youtube.com/watch?v=EeF-L1qO11g flex resistor]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=wYBMpW_dYSg accelerometer controlled frequency modulator]]<br />
<br />
===Lab 3: Firmware Programming===<br />
* Lab 3 [http://ccrma.stanford.edu/wiki/250a_Firmware_Lab instructions]<br />
Hongchan Choi [[http://www.youtube.com/watch?v=MywdIkgEInY video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=axt6v14mu30 video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=j5A-BfSRUEA ligeti poem/cowell rhythmicon]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=W1GD_vhinp4 video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=3O_7R_nO4po video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=uFBls5TX58g throat synthesizer]]<br />
<br />
Kevin Ho [[http://vimeo.com/30824146 Drum Machine]]<br />
<br />
Ben Broer [[http://www.youtube.com/watch?v=8lJWjlslPsc video]]<br />
<br />
Derek Mendez [[http://www.youtube.com/watch?v=xbtACOYErds video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=q3-iQEw_VVI video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=QLmbAW12_M4 video]]<br />
<br />
===Lab 4: Haptics===<br />
* Lab 4 [https://ccrma.stanford.edu/wiki/250a_Haptics_Lab_2011 instructions]<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=JBeD4RdeZKM video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=uPkBv7CycVg video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=d0TLmPlDgzg video]]<br />
<br />
Hongchan Choi [[http://www.youtube.com/watch?v=tlbIHi1vbtk video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=zLC9Cg-l-Yc video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=-z7r2d0A98w Haptic Claw]]<br />
<br />
Kevin Ho [[http://vimeo.com/31168521 video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=r-Nlq9zfcqc&feature=youtu.be video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=GbA-4-rudOQ video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/user/derektingle?feature=mhee#p/a/u/2/5uUh83dZqC0 video]]<br />
<br />
<br />
===Lab 5: Gesture, Audio & Graphics===<br />
* Lab 5 [http://ccrma.stanford.edu/wiki/250a_Accelerometer_Lab instructions]<br />
<br />
Hongchan Choi [[http://www.youtube.com/watch?v=8CZIDNApLSk video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=ueCBgus3z-M&feature=youtu.be video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=ncrPLMhkYMs video]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=S4oQzm4wDeg video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=T00URZqPzcI video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=6W3jtPXEVuk video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=qZ0E7mXzRyA video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=E0SKv31gEjM video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=wQbAICzy9ys Musical Hammer]]<br />
<br />
===Lab 6: Mini-Instrument===<br />
* Lab 6 [https://ccrma.stanford.edu/courses/250a/labs/lab6/ instructions]<br />
Remington Wong [[http://www.youtube.com/watch?v=vR1GqXs9gTM video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=pDRUGcEEfdU&feature=youtu.be video]]<br />
<br />
Kevin Ho [[http://vimeo.com/32194530 video]]<br />
<br />
Evan Lee [[http://youtu.be/pxXdjG-rHZE video]]<br />
<br />
Pankaj Sharma [[http://youtu.be/58cbhA-0RV4 video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=RAjZRr28Rkc video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=QbrSh1O8wo0 video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=lWt0HTgfi_g IR Baton]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=7LuDcu5KZyY Sonic Brush]] [[http://youtu.be/hCTGyICHXA4 musical painting example]]</div>
<hr />
<div>== Music 250a - Autumn 2012 ==<br />
<br />
=== Lab 1: Making Music with Pd ===<br />
<br />
Romain Michon [[http://youtu.be/O9-llQIRhuI Simple Granular Synth]]<br />
<br />
Chris Beachy [[http://www.youtube.com/watch?v=HXyMCFmgbBU FM Synth]]<br />
<br />
Beau Silver [[http://youtu.be/AXp1_bZCHPU Keyboard Play 1]] - [[http://youtu.be/0U1S4c__XHQ Keyboard Play 2]]<br />
<br />
Afrooz Family [[https://ccrma.stanford.edu/~afrooz/250a/lab1/lab1.html Lab 1 Writeup]]<br />
<br />
Evan Gitterman [[http://www.youtube.com/watch?v=emXpfsXTJlI Experimenting with PD]]<br />
<br />
Jimmy Tobin [[http://www.youtube.com/watch?v=l9O8vbuH2mU Lab1]]<br />
<br />
=== Lab 2: Arduino & Sensors===<br />
<br />
Eoin Callery [[http://youtu.be/E5nOmCdYJBA Light and Bend Sensor]]<br />
<br />
Beau Silver [[http://youtu.be/WO5VIcRpzxk Button Drum Machine]]<br />
<br />
Jennifer Hsu [[http://youtu.be/45q9db2BM2M Feedback FM+]]<br />
<br />
Myles Borins [[https://secure.vimeo.com/51204737 FM experimentation]]<br />
<br />
Matt Alexander [[http://youtu.be/HYP8utzuWlU FM Modulated Synth]]<br />
<br />
Romain Michon [[http://youtu.be/PT161GsCnVw Clarinet Physical Model]]<br />
<br />
Tim O'Brien [[http://youtu.be/l5iXLTEbpU8 Squeezebox]]<br />
<br />
Priya Shekar [[http://youtu.be/bp3AHXZAkmI BeatTrip]]<br />
<br />
Cecilia Wu [[https://vimeo.com/51269334 FM Duet]] with the password cecilia<br />
<br />
Ivan Naranjo [[http://www.youtube.com/watch?v=9_GdbqExaJs Multiple Sines + FM]]<br />
<br />
Yoo Hsiu Yeh [[http://youtu.be/mRZGO_JQQaM Everybody Talks]]<br />
<br />
Evan Gitterman [[http://www.youtube.com/watch?v=UOcYBGYDcv0 Accelerometer LFO Synth]]<br />
<br />
Chris Beachy [[http://youtu.be/lf5ypub-RI4 Force Sensor Sine Wave]]<br />
<br />
Sarah Smith [[http://youtu.be/7miob1px2BI Force controlled ADSR]]<br />
<br />
Ilias Karim [[http://youtu.be/S8c6eYyxXno Scales]]<br />
<br />
Alice Fang [[http://youtu.be/QB3aSAz-ymo Modrum]]<br />
<br />
Helen Chavez [[http://www.youtube.com/watch?v=Xb2crIlJFfc&feature=youtu.be Weird Sounds: Phasors/Sine Waves]]<br />
<br />
Jimmy Tobin [[http://www.youtube.com/watch?v=R7AMOHZynos Under Pressure]]<br />
<br />
Lulu DeBoer "the clicker" [[http://www.youtube.com/watch?v=WnD0Ov-jSWs&feature=results_video]]<br />
<br />
Erin Baumann [[http://www.youtube.com/watch?v=DFB5m7vXPqQ&feature=youtu.be]]<br />
<br />
David Meisenholder [[http://youtu.be/y80B73SyCwE LED Band]]<br />
<br />
===Lab 3: Firmware Programming===<br />
<br />
Beau Silver [[http://youtu.be/_lGrsfMvXqw Air Piano]]<br />
<br />
Eoin Callery [[http://youtu.be/H5CrsirZp5o Angry Animal]]<br />
<br />
Tim O'Brien [[http://youtu.be/7yNdcolLj_A Guitars, sampled]]<br />
<br />
Jennifer Hsu [[http://youtu.be/92Xj1c4Ue_I things]]<br />
<br />
Sarah Smith [[http://youtu.be/iJo_xAvTrhs FM Modulated FM]]<br />
<br />
Iván Naranjo [[http://www.youtube.com/watch?v=a2Zdd9qnd-I Filtered Noise + Delayed Pick up mic on Computer]]<br />
<br />
Myles Borins [[https://secure.vimeo.com/51718468 Bellaccelerometer]]<br />
<br />
Helen Chavez [[http://www.youtube.com/watch?v=rKctnuCasD4 FM Synth Play]]<br />
<br />
Romain Michon [[http://youtu.be/8_8R3j3uXvw Voice Synthesizer]]<br />
<br />
Cecilia Wu [[https://vimeo.com/51733584 Crazy knob]] password cecilia<br />
<br />
Evan Gitterman [[http://youtu.be/yvfeRTqH4ls FFT & FM Kazoo]]<br />
<br />
Yoo Hsiu Yeh [[http://youtu.be/PogunVUr12o Happy Halloween]]<br />
<br />
Jimmy Tobin [[http://www.youtube.com/watch?v=FT8pmS4R2hw Dancing Flex Sensor]]<br />
<br />
Chris Beachy [[http://www.youtube.com/watch?v=dmlfarL4bTw&feature=youtu.be Chord Tunerator]]<br />
<br />
Ilias Karim [[http://youtu.be/BaV502vB3y0]]<br />
<br />
Priya Shekar [http://youtu.be/bp3AHXZAkmI BeatTrip v3]<br />
<br />
Erin Baumann [http://www.youtube.com/watch?v=9h1gY85rK2E&feature=youtu.be]<br />
<br />
Matt Alexander [[http://youtu.be/KY_aa1i0MaI Bells]]<br />
<br />
===Lab 4: Haptics===<br />
Lulu DeBoer [[http://www.youtube.com/watch?v=8h7AxVK1QOk&feature=plcp Pink Pluck]]<br />
<br />
Jennifer Hsu [[ http://youtu.be/67f8KabRgrM .-#~!£$pluuuuuuuuck$£!~#-.]]<br />
<br />
Eoin Callery [[https://www.youtube.com/watch?v=sjru_SVk3OA Swelling and Expanding Resonator]]<br />
<br />
Myles Borins [[https://secure.vimeo.com/52143969 how I learned to stop worrying and love haptic feedback]]<br />
<br />
Tim O'Brien [[http://youtu.be/Y9Euv1CC1m8 Lumpy String]]<br />
<br />
Beau Silver [[http://youtu.be/SgYW3zp1mWY Wind Chime]]<br />
<br />
Iván Naranjo [[http://www.youtube.com/watch?v=RUeYML5uUS8 guiro?/PluckedString?]]<br />
<br />
Helen Chavez [[http://www.youtube.com/watch?v=v7bAu-Xa-V4 Door Bells (playing with scales)]]<br />
<br />
Erin Baumann [[http://www.youtube.com/watch?v=TrVRtFqQ0t0&feature=youtu.be (Lab 4)]]<br />
<br />
Romain Michon [[http://youtu.be/g3w5VMV1pqk Karplus-Strong Harp]]<br />
<br />
Cecilia Wu [[https://vimeo.com/52213261 Bouncing Threshold]] password cecilia<br />
<br />
Evan Gitterman [[http://youtu.be/WhfO6VRoeE8 Scratchfader]]<br />
<br />
Alice Fang [[http://youtu.be/bcRzPvdvF2o Mary Christmas]]<br />
<br />
David Meisenholder [[http://youtu.be/gPEOsOrT1nA Bass Moves Me]] <br />
<br />
Jimmy Tobin [http://www.youtube.com/watch?v=fvmZwtrteCo| lab4]<br />
<br />
Ilias Karim [http://youtu.be/nqXYEM4U5ng | haptic beat matching]<br />
<br />
Yoo Hsiu Yeh [[http://youtu.be/_kJFXCtTCEw fur elise]] note - somehow before Jimmy Tobin's post, all the previous submissions got deleted. i pulled them out of the wiki history, but you might want to check that yours is correct.<br />
<br />
Sarah Smith [[http://youtu.be/Tsa66GIpijg Directional strumming instrument]]<br />
<br />
===Lab 5: Mini-Instrument ===<br />
===Lab 6: ???===<br />
<br />
== Music 250a - Autumn 2011 ==<br />
<br />
=== Lab 2: Arduino & Sensors===<br />
* Lab 2 [https://ccrma.stanford.edu/wiki/Talk:250a_Microcontroller_%26_Sensors_Lab_Pd instructions]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=82P7cX4cAoE video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=Pkz5ymua5QQ video]]<br />
<br />
Kevin Ho [[http://vimeo.com/30418793 video]]<br />
<br />
Hongchang Choi [[http://www.youtube.com/watch?v=abNF1xXhNNk video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=hJYS5OI1maQ Thriller fo the FrakenSine~]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=NLSJ39CWeUk Electric Tambura]]<br />
<br />
Ben Broer [[http://www.youtube.com/watch?v=Kg_aSxQ0BOE ReverBrain]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=JsjvaCs-q98 video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=h5SfKpuqQDc video]]<br />
<br />
Jeff Rowell [[http://www.youtube.com/watch?v=6jzTjZkpX2Y video]]<br />
<br />
Derek Mendez [[http://www.youtube.com/watch?v=EeF-L1qO11g flex resistor]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=wYBMpW_dYSg accelerometer controlled frequency modulator]]<br />
<br />
===Lab 3: Firmware Programming===<br />
* Lab 3 [http://ccrma.stanford.edu/wiki/250a_Firmware_Lab instructions]<br />
Hongchan Choi [[http://www.youtube.com/watch?v=MywdIkgEInY video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=axt6v14mu30 video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=j5A-BfSRUEA ligeti poem/cowell rhythmicon]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=W1GD_vhinp4 video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=3O_7R_nO4po video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=uFBls5TX58g throat synthesizer]]<br />
<br />
Kevin Ho [[http://vimeo.com/30824146 Drum Machine]]<br />
<br />
Ben Broer [[http://www.youtube.com/watch?v=8lJWjlslPsc video]]<br />
<br />
Derek Mendez [[http://www.youtube.com/watch?v=xbtACOYErds video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=q3-iQEw_VVI video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=QLmbAW12_M4 video]]<br />
<br />
===Lab 4: Haptics===<br />
* Lab 4 [https://ccrma.stanford.edu/wiki/250a_Haptics_Lab_2011 instructions]<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=JBeD4RdeZKM video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=uPkBv7CycVg video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=d0TLmPlDgzg video]]<br />
<br />
Hongchan Choi [[http://www.youtube.com/watch?v=tlbIHi1vbtk video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=zLC9Cg-l-Yc video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=-z7r2d0A98w Haptic Claw]]<br />
<br />
Kevin Ho [[http://vimeo.com/31168521 video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=r-Nlq9zfcqc&feature=youtu.be video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=GbA-4-rudOQ video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/user/derektingle?feature=mhee#p/a/u/2/5uUh83dZqC0 video]]<br />
<br />
<br />
===Lab 5: Gesture, Audio & Graphics===<br />
* Lab 5 [http://ccrma.stanford.edu/wiki/250a_Accelerometer_Lab instructions]<br />
<br />
Hongchan Choi [[http://www.youtube.com/watch?v=8CZIDNApLSk video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=ueCBgus3z-M&feature=youtu.be video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=ncrPLMhkYMs video]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=S4oQzm4wDeg video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=T00URZqPzcI video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=6W3jtPXEVuk video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=qZ0E7mXzRyA video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=E0SKv31gEjM video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=wQbAICzy9ys Musical Hammer]]<br />
<br />
===Lab 6: Mini-Instrument===<br />
* Lab 6 [https://ccrma.stanford.edu/courses/250a/labs/lab6/ instructions]]<br />
Remington Wong [[http://www.youtube.com/watch?v=vR1GqXs9gTM video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=pDRUGcEEfdU&feature=youtu.be video]]<br />
<br />
Kevin Ho [[http://vimeo.com/32194530 video]]<br />
<br />
Evan Lee [[http://youtu.be/pxXdjG-rHZE video]]<br />
<br />
Pankaj Sharma [[http://youtu.be/58cbhA-0RV4 video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=RAjZRr28Rkc video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=QbrSh1O8wo0 video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=lWt0HTgfi_g IR Baton]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=7LuDcu5KZyY Sonic Brush]] [[http://youtu.be/hCTGyICHXA4 musical painting example]]</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=Music_250a_Lab_Video_Wiki&diff=13606Music 250a Lab Video Wiki2012-10-18T19:04:01Z<p>Srsmith: </p>
<hr />
<div>== Music 250a - Autumn 2012 ==<br />
<br />
=== Lab 1: Making Music with Pd ===<br />
<br />
Romain Michon [[http://youtu.be/O9-llQIRhuI Simple Granular Synth]]<br />
<br />
Chris Beachy [[http://www.youtube.com/watch?v=HXyMCFmgbBU FM Synth]]<br />
<br />
Beau Silver [[http://youtu.be/AXp1_bZCHPU Keyboard Play 1]] - [[http://youtu.be/0U1S4c__XHQ Keyboard Play 2]]<br />
<br />
Afrooz Family [[https://ccrma.stanford.edu/~afrooz/250a/lab1/lab1.html Lab 1 Writeup]]<br />
<br />
Evan Gitterman [[http://www.youtube.com/watch?v=emXpfsXTJlI Experimenting with PD]]<br />
<br />
=== Lab 2: Arduino & Sensors===<br />
<br />
Eoin Callery [[http://youtu.be/E5nOmCdYJBA Light and Bend Sensor]]<br />
<br />
Beau Silver [[http://youtu.be/WO5VIcRpzxk Button Drum Machine]]<br />
<br />
Jennifer Hsu [[http://youtu.be/45q9db2BM2M Feedback FM+]]<br />
<br />
Myles Borins [[https://secure.vimeo.com/51204737 FM experimentation]]<br />
<br />
Matt Alexander [[http://youtu.be/HYP8utzuWlU FM Modulated Synth]]<br />
<br />
Romain Michon [[http://youtu.be/PT161GsCnVw Clarinet Physical Model]]<br />
<br />
Tim O'Brien [[http://youtu.be/l5iXLTEbpU8 Squeezebox]]<br />
<br />
Priya Shekar [[http://youtu.be/bp3AHXZAkmI BeatTrip]]<br />
<br />
Cecilia Wu [[https://vimeo.com/51269334 FM Duet]] with the password cecilia<br />
<br />
Ivan Naranjo [[http://www.youtube.com/watch?v=9_GdbqExaJs Multiple Sines + FM]]<br />
<br />
Yoo Hsiu Yeh [[http://youtu.be/mRZGO_JQQaM Everybody Talks]]<br />
<br />
Evan Gitterman [[http://www.youtube.com/watch?v=UOcYBGYDcv0 Accelerometer LFO Synth]]<br />
<br />
Chris Beachy [[http://youtu.be/lf5ypub-RI4 Force Sensor Sine Wave]]<br />
<br />
Sarah Smith [[http://youtu.be/7miob1px2BI Force controlled ADSR]]<br />
<br />
Ilias Karim [[http://youtu.be/S8c6eYyxXno Scales]]<br />
<br />
Alice Fang [[http://youtu.be/QB3aSAz-ymo Modrum]]<br />
<br />
Helen Chavez [[http://www.youtube.com/watch?v=Xb2crIlJFfc&feature=youtu.be Weird Sounds: Phasors/Sine Waves]]<br />
<br />
===Lab 3: Firmware Programming===<br />
<br />
Beau Silver [[http://youtu.be/_lGrsfMvXqw Air Piano]]<br />
<br />
Tim O'Brien [[http://youtu.be/7yNdcolLj_A Guitars, sampled]]<br />
<br />
Jennifer Hsu [[http://youtu.be/92Xj1c4Ue_I things]]<br />
<br />
Sarah Smith [[http://youtu.be/iJo_xAvTrhs FM Modulated FM]]<br />
<br />
===Lab 4: Haptics===<br />
===Lab 5: Gesture, Audio & Graphics===<br />
===Lab 6: Mini-Instrument===<br />
<br />
== Music 250a - Autumn 2011 ==<br />
<br />
=== Lab 2: Arduino & Sensors===<br />
* Lab 2 [https://ccrma.stanford.edu/wiki/Talk:250a_Microcontroller_%26_Sensors_Lab_Pd instructions]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=82P7cX4cAoE video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=Pkz5ymua5QQ video]]<br />
<br />
Kevin Ho [[http://vimeo.com/30418793 video]]<br />
<br />
Hongchang Choi [[http://www.youtube.com/watch?v=abNF1xXhNNk video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=hJYS5OI1maQ Thriller fo the FrakenSine~]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=NLSJ39CWeUk Electric Tambura]]<br />
<br />
Ben Broer [[http://www.youtube.com/watch?v=Kg_aSxQ0BOE ReverBrain]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=JsjvaCs-q98 video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=h5SfKpuqQDc video]]<br />
<br />
Jeff Rowell [[http://www.youtube.com/watch?v=6jzTjZkpX2Y video]]<br />
<br />
Derek Mendez [[http://www.youtube.com/watch?v=EeF-L1qO11g flex resistor]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=wYBMpW_dYSg accelerometer controlled frequency modulator]]<br />
<br />
Lulu DeBoer [[http://www.youtube.com/watch?v=WnD0Ov-jSWs&feature=results_video the clicker]]<br />
<br />
===Lab 3: Firmware Programming===<br />
* Lab 3 [http://ccrma.stanford.edu/wiki/250a_Firmware_Lab instructions]<br />
Hongchan Choi [[http://www.youtube.com/watch?v=MywdIkgEInY video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=axt6v14mu30 video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=j5A-BfSRUEA ligeti poem/cowell rhythmicon]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=W1GD_vhinp4 video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=3O_7R_nO4po video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=uFBls5TX58g throat synthesizer]]<br />
<br />
Kevin Ho [[http://vimeo.com/30824146 Drum Machine]]<br />
<br />
Ben Broer [[http://www.youtube.com/watch?v=8lJWjlslPsc video]]<br />
<br />
Derek Mendez [[http://www.youtube.com/watch?v=xbtACOYErds video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=q3-iQEw_VVI video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=QLmbAW12_M4 video]]<br />
<br />
===Lab 4: Haptics===<br />
* Lab 4 [https://ccrma.stanford.edu/wiki/250a_Haptics_Lab_2011 instructions]<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=JBeD4RdeZKM video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=uPkBv7CycVg video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=d0TLmPlDgzg video]]<br />
<br />
Hongchan Choi [[http://www.youtube.com/watch?v=tlbIHi1vbtk video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=zLC9Cg-l-Yc video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=-z7r2d0A98w Haptic Claw]]<br />
<br />
Kevin Ho [[http://vimeo.com/31168521 video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=r-Nlq9zfcqc&feature=youtu.be video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=GbA-4-rudOQ video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/user/derektingle?feature=mhee#p/a/u/2/5uUh83dZqC0 video]]<br />
<br />
===Lab 5: Gesture, Audio & Graphics===<br />
* Lab 5 [http://ccrma.stanford.edu/wiki/250a_Accelerometer_Lab instructions]<br />
<br />
Hongchan Choi [[http://www.youtube.com/watch?v=8CZIDNApLSk video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=ueCBgus3z-M&feature=youtu.be video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=ncrPLMhkYMs video]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=S4oQzm4wDeg video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=T00URZqPzcI video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=6W3jtPXEVuk video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=qZ0E7mXzRyA video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=E0SKv31gEjM video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=wQbAICzy9ys Musical Hammer]]<br />
<br />
===Lab 6: Mini-Instrument===<br />
* Lab 6 [https://ccrma.stanford.edu/courses/250a/labs/lab6/ instructions]<br />
Remington Wong [[http://www.youtube.com/watch?v=vR1GqXs9gTM video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=pDRUGcEEfdU&feature=youtu.be video]]<br />
<br />
Kevin Ho [[http://vimeo.com/32194530 video]]<br />
<br />
Evan Lee [[http://youtu.be/pxXdjG-rHZE video]]<br />
<br />
Pankaj Sharma [[http://youtu.be/58cbhA-0RV4 video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=RAjZRr28Rkc video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=QbrSh1O8wo0 video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=lWt0HTgfi_g IR Baton]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=7LuDcu5KZyY Sonic Brush]] [[http://youtu.be/hCTGyICHXA4 musical painting example]]</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=Music_250a_Lab_Video_Wiki&diff=13551Music 250a Lab Video Wiki2012-10-13T07:23:03Z<p>Srsmith: /* Lab 2: Arduino & Sensors */</p>
<hr />
<div>== Music 250a - Autumn 2012 ==<br />
<br />
=== Lab 1: Making Music with Pd ===<br />
<br />
Romain Michon [[http://youtu.be/O9-llQIRhuI Simple Granular Synth]]<br />
<br />
Chris Beachy [[http://www.youtube.com/watch?v=HXyMCFmgbBU FM Synth]]<br />
<br />
Beau Silver [[http://youtu.be/AXp1_bZCHPU Keyboard Play 1]] - [[http://youtu.be/0U1S4c__XHQ Keyboard Play 2]]<br />
<br />
Afrooz Family [[https://ccrma.stanford.edu/~afrooz/250a/lab1/lab1.html Lab 1 Writeup]]<br />
<br />
Evan Gitterman [[http://www.youtube.com/watch?v=emXpfsXTJlI Experimenting with PD]]<br />
<br />
=== Lab 2: Arduino & Sensors===<br />
<br />
Eoin Callery [[http://youtu.be/E5nOmCdYJBA Light and Bend Sensor]]<br />
<br />
Beau Silver [[http://youtu.be/WO5VIcRpzxk Button Drum Machine]]<br />
<br />
Jennifer Hsu [[http://youtu.be/45q9db2BM2M Feedback FM+]]<br />
<br />
Myles Borins [[https://secure.vimeo.com/51204737 FM experimentation]]<br />
<br />
Matt Alexander [[http://youtu.be/HYP8utzuWlU FM Modulated Synth]]<br />
<br />
Romain Michon [[http://youtu.be/PT161GsCnVw Clarinet Physical Model]]<br />
<br />
Tim O'Brien [[http://youtu.be/l5iXLTEbpU8 Squeezebox]]<br />
<br />
Priya Shekar [[http://youtu.be/bp3AHXZAkmI BeatTrip]]<br />
<br />
Cecilia Wu [[https://vimeo.com/51269334 FM Duet]] with the password cecilia<br />
<br />
Ivan Naranjo [[http://www.youtube.com/watch?v=9_GdbqExaJs Multiple Sines + FM]]<br />
<br />
Yoo Hsiu Yeh [[http://youtu.be/mRZGO_JQQaM Everybody Talks]]<br />
<br />
Evan Gitterman [[http://www.youtube.com/watch?v=UOcYBGYDcv0 Accelerometer LFO Synth]]<br />
<br />
Chris Beachy [[http://youtu.be/lf5ypub-RI4 Force Sensor Sine Wave]]<br />
<br />
Sarah Smith [[http://youtu.be/7miob1px2BI Force controlled ADSR]]<br />
<br />
===Lab 3: Firmware Programming===<br />
===Lab 4: Haptics===<br />
===Lab 5: Gesture, Audio & Graphics===<br />
===Lab 6: Mini-Instrument===<br />
<br />
== Music 250a - Autumn 2011 ==<br />
<br />
=== Lab 2: Arduino & Sensors===<br />
* Lab 2 [https://ccrma.stanford.edu/wiki/Talk:250a_Microcontroller_%26_Sensors_Lab_Pd instructions]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=82P7cX4cAoE video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=Pkz5ymua5QQ video]]<br />
<br />
Kevin Ho [[http://vimeo.com/30418793 video]]<br />
<br />
Hongchang Choi [[http://www.youtube.com/watch?v=abNF1xXhNNk video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=hJYS5OI1maQ Thriller fo the FrakenSine~]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=NLSJ39CWeUk Electric Tambura]]<br />
<br />
Ben Broer [[http://www.youtube.com/watch?v=Kg_aSxQ0BOE ReverBrain]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=JsjvaCs-q98 video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=h5SfKpuqQDc video]]<br />
<br />
Jeff Rowell [[http://www.youtube.com/watch?v=6jzTjZkpX2Y video]]<br />
<br />
Derek Mendez [[http://www.youtube.com/watch?v=EeF-L1qO11g flex resistor]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=wYBMpW_dYSg accelerometer controlled frequency modulator]]<br />
<br />
===Lab 3: Firmware Programming===<br />
* Lab 3 [http://ccrma.stanford.edu/wiki/250a_Firmware_Lab instructions]<br />
Hongchan Choi [[http://www.youtube.com/watch?v=MywdIkgEInY video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=axt6v14mu30 video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=j5A-BfSRUEA ligeti poem/cowell rhythmicon]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=W1GD_vhinp4 video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=3O_7R_nO4po video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=uFBls5TX58g throat synthesizer]]<br />
<br />
Kevin Ho [[http://vimeo.com/30824146 Drum Machine]]<br />
<br />
Ben Broer [[http://www.youtube.com/watch?v=8lJWjlslPsc video]]<br />
<br />
Derek Mendez [[http://www.youtube.com/watch?v=xbtACOYErds video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=q3-iQEw_VVI video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=QLmbAW12_M4 video]]<br />
<br />
===Lab 4: Haptics===<br />
* Lab 4 [https://ccrma.stanford.edu/wiki/250a_Haptics_Lab_2011 instructions]<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=JBeD4RdeZKM video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=uPkBv7CycVg video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=d0TLmPlDgzg video]]<br />
<br />
Hongchan Choi [[http://www.youtube.com/watch?v=tlbIHi1vbtk video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=zLC9Cg-l-Yc video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=-z7r2d0A98w Haptic Claw]]<br />
<br />
Kevin Ho [[http://vimeo.com/31168521 video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=r-Nlq9zfcqc&feature=youtu.be video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=GbA-4-rudOQ video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/user/derektingle?feature=mhee#p/a/u/2/5uUh83dZqC0 video]]<br />
<br />
===Lab 5: Gesture, Audio & Graphics===<br />
* Lab 5 [http://ccrma.stanford.edu/wiki/250a_Accelerometer_Lab instructions]<br />
<br />
Hongchan Choi [[http://www.youtube.com/watch?v=8CZIDNApLSk video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=ueCBgus3z-M&feature=youtu.be video]]<br />
<br />
Remington Wong [[http://www.youtube.com/watch?v=ncrPLMhkYMs video]]<br />
<br />
Pankaj Sharma [[http://www.youtube.com/watch?v=S4oQzm4wDeg video]]<br />
<br />
Evan Lee [[http://www.youtube.com/watch?v=T00URZqPzcI video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=6W3jtPXEVuk video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=qZ0E7mXzRyA video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=E0SKv31gEjM video]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=wQbAICzy9ys Musical Hammer]]<br />
<br />
===Lab 6: Mini-Instrument===<br />
* Lab 6 [https://ccrma.stanford.edu/courses/250a/labs/lab6/ instructions]<br />
Remington Wong [[http://www.youtube.com/watch?v=vR1GqXs9gTM video]]<br />
<br />
Ben Broer [[https://www.youtube.com/watch?v=pDRUGcEEfdU&feature=youtu.be video]]<br />
<br />
Kevin Ho [[http://vimeo.com/32194530 video]]<br />
<br />
Evan Lee [[http://youtu.be/pxXdjG-rHZE video]]<br />
<br />
Pankaj Sharma [[http://youtu.be/58cbhA-0RV4 video]]<br />
<br />
Jacob Wittenberg [[http://www.youtube.com/watch?v=RAjZRr28Rkc video]]<br />
<br />
John Granzow [[http://www.youtube.com/watch?v=QbrSh1O8wo0 video]]<br />
<br />
Derek Tingle [[http://www.youtube.com/watch?v=lWt0HTgfi_g IR Baton]]<br />
<br />
Joel Sadler [[http://www.youtube.com/watch?v=7LuDcu5KZyY Sonic Brush]] [[http://youtu.be/hCTGyICHXA4 musical painting example]]</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=2D_materials_2012&diff=135222D materials 20122012-10-10T18:48:21Z<p>Srsmith: </p>
<hr />
<div>The following materials are already taken for HW3 in Music 250A Autumn 2012:<br><br><br />
Cecilia: Bamboo - BLONDE<br><br />
Myles: Bamboo - Amber (not 3 ply) http://www.ponoko.com/make-and-sell/show-material/83-bamboo-amber<br><br />
Ilias: Acrylic - Fluoro Orange<br><br />
Tim O'Brien: Acrylic Mirror<br><br />
Priya Shekar: Acrylic Purple Tint<br><br />
Sarah Smith: Acrylic Green</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=2D_materials_2012&diff=135212D materials 20122012-10-10T18:48:08Z<p>Srsmith: </p>
<hr />
<div>The following materials are already taken for HW3 in Music 250A Autumn 2012:<br><br><br />
Cecilia: Bamboo - BLONDE<br><br />
Myles: Bamboo - Amber (not 3 ply) http://www.ponoko.com/make-and-sell/show-material/83-bamboo-amber<br><br />
Ilias: Acrylic - Fluoro Orange<br><br />
Tim O'Brien: Acrylic Mirror<br><br />
Priya Shekar: Acrylic Purple Tint<br />
Sarah Smith: Acrylic Green</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Srsmith&diff=13200User:Srsmith2012-06-13T08:47:32Z<p>Srsmith: /* Weekly Updates: */</p>
<hr />
<div>== Music 220C Project Page: Algorithmic Counterpoint Using Markov Models ==<br />
<br />
My project for this quarter is a real-time system for algorithmic counterpoint. The system takes an audio input (in this case, cello) and generates upper voices in good counterpoint. For the final project I am hoping to generate two upper voices above the cello line in real time: one in first species and another in second species. In order to do this, it may be necessary to make some assumptions about the input line (e.g., that it is tonal and of known length), owing to the rules governing how lines must end to remain in good counterpoint. While the real-time generation of the two lines is the core of the system, I would ideally like to turn it into a more interactive performance system that allows the cellist to play something other than the bass line. One possible implementation would be to assume that the bass line repeats, and then let the performer add their own line or embellish the bass line as desired without interfering with the counterpoint engine.<br />
<br />
<br />
== Weekly Updates: ==<br />
06/13/2012 - Final<br />
Work for this quarter is finished. A website with more formal documentation will be up soon, but I have uploaded the code and other files in the meantime:<br />
<br />
[http://ccrma.stanford.edu/~srsmith/CountGen/CountGen.tgz Source Code and compiled Linux executable]<br />
<br />
[http://ccrma.stanford.edu/~srsmith/CountGen/CountGen-Final-Presentation-Slides.pdf Slides from today's presentation]<br />
<br />
[http://ccrma.stanford.edu/~srsmith/CountGen/Markov-Tables.xlsx Color coded version of the Markov tables, with some descriptions]<br />
<br />
[http://ccrma.stanford.edu/~srsmith/CountGen/Examples.pdf Score containing some sample outputs]<br />
06/01/2012 - Week 8 <br />
Less work got done this week as a result of preparing for and helping with the spring concert. I am still working on getting the pitch and note-change detection stabilized and implementing a pitch shift to generate the output sound. I will also probably make some final tweaks to the Markov tables.<br />
<br />
05/23/2012 - Week 7<br />
<br />
The counterpoint algorithm is finished and implemented. There are minor improvements that could still be made, but it works well. For the remainder of the quarter I plan to focus my efforts on the real-time audio processing aspects of the system.<br />
<br />
This week I spent a fair amount of time testing the system with a variety of cantus firmi, running each many times. In total, there were 64 runs of the system, and only once did it reach a situation it could not resolve. In this particular scenario (leaping to the note two notes before the end, which eliminates the possibility of reaching an appropriate penultimate note while also resolving the leap), continuing the melodic motion in the direction of the leap could be allowed as a special case, even though it is forbidden in this musical style. The current default when faced with no good options is for the system to remain on the same note, potentially resulting in a clashing sonority. The preferable action would probably be to move to a consonant pitch in anticipation of the cadence.<br />
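The preferable fallback described above could look something like the following sketch. This is illustrative Python (the actual system is written in C++/ChucK); the pitch numbers are hypothetical diatonic scale steps, and the consonance set is the usual first-species one.<br />

```python
# Consonant harmonic intervals above the bass, as diatonic steps mod 7:
# unison/octave (0), third (2), fifth (4), and sixth (5).
CONSONANCES = {0, 2, 4, 5}

def fallback_pitch(current_pitch, bass_pitch, candidates):
    """When the Markov tables leave no good option, move to the nearest
    consonant candidate rather than holding the current note (which may
    clash with the bass); hold only if nothing consonant is available."""
    consonant = [p for p in candidates
                 if (p - bass_pitch) % 7 in CONSONANCES]
    if not consonant:
        return current_pitch  # last resort: repeat the note
    return min(consonant, key=lambda p: abs(p - current_pitch))
```

With the bass on scale step 0 and the melody on step 8, a candidate set containing step 9 (a third above the octave) would be chosen over holding step 8, which forms a dissonant second against the bass.<br />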
<br />
I have also started re-incorporating the pitch tracking functions that were originally in DuetYourself. They are not fully debugged yet for use with a cello, but it should be doable. Once the pitch tracking works, I plan to use a pitch shifted version of the input to generate the output sound. <br />
<br />
05/14/2012 - Week 6.5<br />
<br />
I finally have most of the basic system up and running now. The same bugs in the synthesis still exist, but I have imported the Markov tables and can generate reasonable counterpoint with a few exceptions.<br />
<br />
Counterpoint issues to be implemented/adjusted/fixed:<br />
- Add resolution of tendency tones - this is not included in the current model and definitely needs to be<br />
- Add the cadencing structure - This table has been imported but is just not being used at the moment as other issues are being debugged<br />
- Start up: There's something weird about how I am generating the state for the first pitch. Even the first note has an assigned "previous pitch" which is used to calculate pitch #2. It would be nice to not have this necessarily, and not use information about the preceding melodic table for the first interval (this shouldn't be too hard) <br />
- The solutions at the moment are very deterministic for some reason, although I can't figure out why. There are also some cases where I get an error for no possible solutions although there should be some...<br />
<br />
Synthesis/Audio Related Issues<br />
- Now that the Markov system is running, I hope to return my attention to issues of audio output and eventually input. <br />
- I am hoping to have the cello input working in time for the final presentation, but am still not sure whether this will happen or not. The issues of pitch tracking, etc have so far taken a back burner to the calculation issues. Now that things are starting to work more smoothly, this is something I want to try again. <br />
<br />
Goals for Next Week:<br />
For next week, I would like to have an output synthesis of some form that is working and letting me hear these things. Also I hope to incorporate the cadence table and tweak some of the other issues. <br />
<br />
05/04/2012 - Week 5 <br />
<br />
This week was mostly spent working with the C++ code, configuring it to take MIDI pitches from the command line as input for calculating the counterpoint line. I also spent time working on getting a good output sound for the system, but encountered some bugs along the way. Additionally, I was able to finish populating the Markov tables that I am using. For next week, I hope to have these bugs fixed and more of the code done to import the Markov tables and use them to choose the next pitch.<br />
<br />
04/25/2012 - Week 4<br />
<br />
First, this week I created a bibliography of the literature that I have been using as a basis for this research. This bibliography will be updated as the project continues. Its current version can be found here (insert link). Also, due to ongoing issues with the real-time interface and pitch tracking, I have created a ChucK program that lets me type numbers on the laptop keyboard corresponding to MIDI pitches, and outputs both the input and output MIDI numbers. This structure is the same as the function that will eventually be used with real-time audio input, but allows for easier debugging.<br />
<br />
This week, my work was mainly focused on developing a general layout for the Markov tables that will be used. The proposed system will now use a set of 4 different tables. The first one (table 1) implements rules regarding acceptable melodic intervals and conventions for their sequencing. As such, it takes the most recent melodic interval in the generated line and provides a probability for each of the possible following intervals.<br />
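The table-1 lookup described above amounts to sampling from a first-order Markov transition table. Here is a minimal Python sketch (the actual system is C++/ChucK); the interval values and probabilities below are placeholders, not the values from the real tables.<br />

```python
import random

# Hypothetical transition table: previous melodic interval (in diatonic
# steps) -> probability of each candidate next interval.
MELODIC_TABLE = {
    +1: {+1: 0.4, -1: 0.3, +2: 0.2, -4: 0.1},  # after a step up
    -4: {+1: 0.9, +2: 0.1},  # a downward leap prefers stepwise recovery
}

def next_interval(prev_interval, table, rng=random.random):
    """Sample the next melodic interval given the most recent one."""
    row = table[prev_interval]
    r = rng() * sum(row.values())  # tolerate rows that don't sum to exactly 1
    cumulative = 0.0
    for interval, prob in row.items():
        cumulative += prob
        if r < cumulative:
            return interval
    return interval  # guard against floating-point round-off
```

Sampling repeatedly after a downward leap (`prev_interval = -4`) would then yield an upward step about 90% of the time, mirroring the conventional stepwise recovery rule.<br />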
<br />
A second table implements limits on the melodic range at any given point, disallowing intervals that would result in a note outside of the melody's range. This range is generally limited to a major ninth; however, it narrows towards the end of the phrase, in order to steer the melody towards the region it needs to occupy for the cadence. (Table 2)<br />
<br />
Finally, the remaining rules will be implemented in a three-dimensional table that relates both the current harmonic interval and the upcoming melodic interval in the bass to the possible upcoming interval in the melody. This structure allows for control of voice-leading rules such as parallel fifths and octaves, as well as preferring contrary motion and avoiding certain other cases. The main challenge with this implementation is that this table will be quite large (approx. 14 x 17 x 7 = 1,666 elements). In order to do this, I propose to generate the table somewhat by hand, using the penalty values in Schottstaedt and converting these to probabilities. For the prohibitions and major errors (penalty infinite and 200, respectively) a probability of 0 will be used. For errors with penalty 100, a probability of 0.001 will be used; this should prevent these cases from being chosen unless they are the only option. For the remaining errors, the penalty is converted to a probability by the function Prob = 1 - Penalty/100. I don't know yet whether this will work well, but it seems like a reasonable starting point for generating this table.<br />
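The penalty-to-probability scheme above is small enough to state directly as code. The thresholds are the ones given in the text; everything else (names, the treatment of penalties between 100 and 200) is an illustrative Python sketch, not the system's actual implementation.<br />

```python
import math

def penalty_to_probability(penalty):
    """Map a Schottstaedt-style penalty onto an (unnormalized)
    transition probability, per the scheme described above."""
    if math.isinf(penalty) or penalty >= 200:  # prohibitions and major errors
        return 0.0
    if penalty >= 100:  # serious errors: usable only as a last resort
        return 0.001
    return 1.0 - penalty / 100.0  # milder errors scale linearly
```

A penalty of 20 thus becomes a probability of 0.8, while a penalty of 100 survives only as a 0.001 escape hatch for when no better option exists.<br />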
<br />
<br />
<br />
04/18/2012 - Week 2 & 3<br />
<br />
Work this week was divided between two areas. First, I spent some time reworking the code from DuetYourself to only include the functionality that I would like to expand upon for this project (the additional generated tonal line) and removed some of the other functionality from this version. I also tested the pitch tracking with a cello input (it didn't work) and started working on ways to fix this issue. I am hoping that by using a longer input buffer (needed for low pitches) and moving the calculations (currently an autocorrelation) into the frequency domain I can improve the accuracy. If this doesn't work, some additional filtering may be needed to distinguish the fundamental from the harmonics, but hopefully the longer buffer will be sufficient to still do this computation in real time. In order to facilitate testing of this system, I am also working on adding the option of using a wav file as input per Chris' suggestion. This is not fully implemented yet, but is in the works. I also recorded some sample melody lines on the cello that can be used for testing purposes.<br />
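The pitch-tracking plan above — a buffer long enough to hold at least two periods of the lowest expected pitch — can be sketched as follows. This is Python for illustration only (the real code is C++); the frequency-domain variant mentioned above would replace the inner loop with an FFT-based autocorrelation.<br />

```python
import math

def autocorr_pitch(frame, sample_rate, fmin=60.0, fmax=1000.0):
    """Estimate a fundamental by time-domain autocorrelation. The frame
    must be longer than sample_rate / fmin samples, which is why low
    cello notes (open C is ~65 Hz) need a long input buffer."""
    mean = sum(frame) / len(frame)
    sig = [x - mean for x in frame]  # remove DC offset
    lo = int(sample_rate // fmax)  # shortest candidate lag
    hi = min(int(sample_rate // fmin), len(sig) - 1)  # longest candidate lag
    best_lag, best_val = lo, float("-inf")
    for lag in range(lo, hi + 1):
        # In the frequency-domain variant, this whole loop becomes one FFT.
        val = sum(sig[i] * sig[i + lag] for i in range(len(sig) - lag))
        if val > best_val:
            best_val, best_lag = val, lag
    return sample_rate / best_lag
```

A 50 ms frame at 44.1 kHz (2,205 samples) comfortably spans the roughly 735-sample maximum lag needed for a 60 Hz floor.<br />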
<br />
The second portion of work this week was devoted to trying to bring myself up to date on the existing literature regarding algorithmic counterpoint. Somewhat surprisingly, I did not find much at all on methods for doing this in real time. Almost all of the literature approached the problem as harmonizing with a given bass line, where the entire bass line is known in advance. Most of this research involved either a genetic algorithm or a forward working method of generating melody options, combined with some sort of scoring system to rank the possible outputs. In contrast to this area of literature, there has also been some work done on using Markov models to complete the task, which seems more applicable to the real time application that I envision.<br />
<br />
Finally, the overall project design was fleshed out with more specifics regarding what I would like to be able to implement this quarter.<br />
<br />
For next week, I hope to finish the baseline code with working pitch tracking, wav file input, and include the option to have more than one generated counterpoint voice. I would also like to use the time to decide on how I want to implement the counterpoint problem in my system.<br />
<br />
04/07/2012 - Week 1<br />
<br />
This week I decided on the general project area (algorithmic counterpoint) and laid out a general vision for what I would like the final project to look like. I think it should be a real time performance tool that will take audio input (from my cello for the current purposes) that will be used as a base line for the counterpoint. The system will then attempt to generate upper lines (hopefully 2 of them) in good counterpoint with the input.<br />
<br />
In considering the scope of the project, it is likely that I will need to place some constraints on the input bass line (such as restricting it to cadence on the downbeat of every eighth bar and to remain within a single key), though I don't yet know exactly what these will be.<br />
<br />
The system will be based partially on work I did in the fall quarter for music 256a developing a system that could generate a consonant counterpoint line for a given input, although it was not strict counterpoint, and did not generally result in a particularly musical output. For more information see [https://ccrma.stanford.edu/~srsmith/256a/DuetYourself DuetYourself].<br />
<br />
For next week, I plan to work on revising the relevant parts of DuetYourself to create a baseline set of code for this project. In particular, I want to add the ability to read in audio from a file instead of the live microphone input, for use in development. Related to this, I also want to record a few selections of cello playing that can be used as test inputs to the system. Finally, I want to look more into the existing work in this area (of which there seems to be a lot) to figure out how to go about the counterpoint generation.</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Srsmith&diff=13199User:Srsmith2012-06-13T08:36:17Z<p>Srsmith: /* Weekly Updates: */</p>
<hr />
<div>== Music 220C Project Page: Algorithmic Counterpoint Using Markov Models ==<br />
<br />
My project for this quarter is a real-time system for algorithmic counterpoint. The system takes an audio input (in this case, cello) and generates upper voices in good counterpoint. For the final project I am hoping to generate two upper voices above the cello line in real time: one in first species and another in second species. In order to do this, it may be necessary to make some assumptions about the input line (e.g., that it is tonal and of known length), owing to the rules governing how lines must end to remain in good counterpoint. While the real-time generation of the two lines is the core of the system, I would ideally like to turn it into a more interactive performance system that allows the cellist to play something other than the bass line. One possible implementation would be to assume that the bass line repeats, and then let the performer add their own line or embellish the bass line as desired without interfering with the counterpoint engine.<br />
<br />
<br />
== Weekly Updates: ==<br />
06/13/2012 - Final<br />
Work for this quarter is finished. A website with more formal documentation will be up soon, but I have uploaded the code and other files in the meantime:<br />
[http://ccrma.stanford.edu/~srsmith/CountGen/CountGen.tgz Source Code and compiled Linux executable]<br />
<br />
[http://ccrma.stanford.edu/~srsmith/CountGen/CountGen-Final-Presentation-Slides.pdf Slides from today's presentation]<br />
<br />
[http://ccrma.stanford.edu/~srsmith/CountGen/Markov-Tables.xlsx Color coded version of the Markov tables, with some descriptions]<br />
<br />
06/01/2012 - Week 8 <br />
Less work got done this week as a result of preparing for and helping with the spring concert. I am still working on getting the pitch and note-change detection stabilized and implementing a pitch shift to generate the output sound. I will also probably make some final tweaks to the Markov tables.<br />
<br />
05/23/2012 - Week 7<br />
<br />
The counterpoint algorithm is finished and implemented. There are minor improvements that could still be made, but it works well. For the remainder of the quarter I plan to focus my efforts on the real-time audio processing aspects of the system.<br />
<br />
This week I spent a fair amount of time testing the system with a variety of cantus firmi, running each many times. In total, there were 64 runs of the system, and only once did it reach a situation it could not resolve. In this particular scenario (leaping to the note two notes before the end, which eliminates the possibility of reaching an appropriate penultimate note while also resolving the leap), continuing the melodic motion in the direction of the leap could be allowed as a special case, even though it is forbidden in this musical style. The current default when faced with no good options is for the system to remain on the same note, potentially resulting in a clashing sonority. The preferable action would probably be to move to a consonant pitch in anticipation of the cadence.<br />
<br />
I have also started re-incorporating the pitch tracking functions that were originally in DuetYourself. They are not fully debugged yet for use with a cello, but it should be doable. Once the pitch tracking works, I plan to use a pitch shifted version of the input to generate the output sound. <br />
<br />
05/14/2012 - Week 6.5<br />
<br />
I finally have most of the basic system up and running now. The same bugs in the synthesis still exist, but I have imported the Markov tables and can generate reasonable counterpoint with a few exceptions.<br />
<br />
Counterpoint issues to be implemented/adjusted/fixed:<br />
- Add resolution of tendency tones - this is not included in the current model and definitely needs to be.<br />
- Add the cadencing structure - this table has been imported but is not yet being used while other issues are debugged.<br />
- Start-up: there's something odd about how I am generating the state for the first pitch. Even the first note has an assigned "previous pitch," which is used to calculate pitch #2. It would be nice to drop this and not use the preceding-interval melodic table for the first interval (this shouldn't be too hard). <br />
- The solutions at the moment are very deterministic, although I can't figure out why. There are also some cases where I get an error for no possible solutions although there should be some...<br />
<br />
Synthesis/Audio Related Issues<br />
- Now that the Markov system is running, I hope to return my attention to issues of audio output and eventually input. <br />
- I am hoping to have the cello input working in time for the final presentation, but am still not sure whether this will happen. Pitch tracking and related issues have so far taken a back seat to the calculation issues; now that things are starting to work more smoothly, this is something I want to try again. <br />
<br />
Goals for Next Week:<br />
For next week, I would like to have an output synthesis of some form that is working and letting me hear these things. Also I hope to incorporate the cadence table and tweak some of the other issues. <br />
<br />
05/04/2012 - Week 5 <br />
<br />
This week was mostly spent configuring the C++ code to accept MIDI pitches from the command line for calculating the counterpoint line. I also spent time working on getting a good output sound for the system, but encountered some bugs along the way. Additionally, I was able to finish populating values into the Markov tables that I am using. For next week, I hope to have these bugs fixed and more of the code done to import the Markov tables and use them to choose the next pitch. <br />
<br />
04/25/2012 - Week 4<br />
<br />
First, this week I created a bibliography of the literature I have been using as a basis for this research; it will be updated as the project continues. Its current version can be found here (insert link). Also, due to ongoing issues with the real-time interface and pitch tracking, I have created a ChucK program that lets me type numbers on the laptop keyboard corresponding to MIDI pitches, and outputs both the input and output MIDI numbers. This structure is the same as the function that will eventually be used with real-time audio input, but allows for easier debugging. <br />
<br />
This week my work focused mainly on developing a general layout for the Markov tables that will be used. The proposed system will now use a set of four tables. The first (Table 1) implements rules regarding acceptable melodic intervals and conventions for their sequencing: it takes the most recent melodic interval in the generated line and provides a probability for each possible following interval. <br />
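As a hypothetical sketch of how Table 1 could be consulted at run time (the function name, interval set, and table layout are illustrative, not taken from the actual code), drawing the next melodic interval from one row of the table might look like:

```cpp
#include <random>
#include <vector>

// Given the row of Table 1 corresponding to the most recent melodic interval,
// sample the next interval according to the row's probabilities.
// rowProbs: one probability per candidate interval; intervals: the candidate
// intervals in semitones. discrete_distribution normalizes the weights, so
// the row need not sum exactly to 1.
int sampleNextInterval(const std::vector<double>& rowProbs,
                       const std::vector<int>& intervals,
                       std::mt19937& rng) {
    std::discrete_distribution<int> pick(rowProbs.begin(), rowProbs.end());
    return intervals[pick(rng)];
}
```

Because zero-weight entries can never be drawn, prohibitions expressed as probability 0 in the table are enforced automatically by the sampler.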
<br />
A second table (Table 2) limits the melodic range at any given point, disallowing intervals that would result in a note outside the melody's range. This range is generally limited to a major ninth; however, it narrows toward the end of the phrase in order to steer the melody toward the region it needs to occupy for the cadence. <br />
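A hypothetical sketch of this range gate (all names are illustrative; explicit MIDI pitch bounds stand in for the narrowing span near the cadence):

```cpp
#include <vector>

// Zero out the probability of any candidate interval that would carry the
// melody outside the allowed range [rangeLow, rangeHigh] (MIDI note numbers).
// For most of the phrase the bounds would span a major ninth (14 semitones);
// narrowing toward the cadence is expressed by passing tighter bounds.
std::vector<double> applyRangeLimit(std::vector<double> rowProbs,
                                    const std::vector<int>& intervals,  // semitones
                                    int currentPitch,                   // MIDI
                                    int rangeLow, int rangeHigh) {      // MIDI bounds
    for (std::size_t i = 0; i < rowProbs.size(); ++i) {
        int next = currentPitch + intervals[i];
        if (next < rangeLow || next > rangeHigh)
            rowProbs[i] = 0.0;   // disallowed: lands outside the melody's range
    }
    return rowProbs;
}
```

Applying this mask to a Table 1 row before sampling combines the two tables by simple element-wise gating.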
<br />
Finally, the remaining rules will be implemented in a three-dimensional table that relates both the current harmonic interval and the upcoming melodic interval in the bass to the possible upcoming intervals in the melody. This structure allows control of voice-leading rules such as parallel fifths and octaves, as well as preferences such as contrary motion and avoidance of certain other cases. The main challenge with this implementation is that the table will be quite large (approximately 14 x 17 x 7 = 1,666 elements). I propose to generate this table somewhat by hand, using the penalty values in Schottstaedt and converting them to probabilities. For the prohibitions and major errors (infinite and 200 penalty, respectively) a probability of 0 will be used. For errors with penalty 100, a probability of 0.001 will be used; this should prevent those cases from being chosen unless they are the only option. For the remaining errors, the penalty is converted to a probability via Prob = 1 - Penalty/100. I don't know yet whether this will work well, but it seems like a reasonable starting point for generating this table. <br />
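The penalty-to-probability conversion just described could be sketched as follows (the function name is illustrative, and penalties strictly between 100 and 200 - which the scheme above leaves unspecified - are treated like the 100 case here):

```cpp
#include <cmath>

// Convert a Schottstaedt-style penalty into a transition probability:
//   infinite or >= 200 (prohibitions, major errors) -> 0
//   >= 100 (last-resort errors)                     -> 0.001
//   otherwise                                       -> 1 - penalty/100
double penaltyToProb(double penalty) {
    if (std::isinf(penalty) || penalty >= 200.0) return 0.0;
    if (penalty >= 100.0) return 0.001;
    return 1.0 - penalty / 100.0;
}
```

The near-zero 0.001 keeps a penalized option alive only when every alternative has been zeroed out, matching the intent described above.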
<br />
<br />
<br />
04/18/2012 - Week 2 & 3<br />
<br />
Work this week was divided between two areas. First, I spent some time reworking the code from DuetYourself to only include the functionality that I would like to expand upon for this project (the additional generated tonal line) and removed some of the other functionality from this version. I also tested the pitch tracking with a cello input (it didn't work) and started working on ways to fix this issue. I am hoping that by using a longer input buffer (needed for low pitches) and moving the calculations (currently an autocorrelation) into the frequency domain I can improve the accuracy. If this doesn't work, some additional filtering may be needed to distinguish the fundamental from the harmonics, but hopefully the longer buffer will be sufficient to still do this computation in real time. In order to facilitate testing of this system, I am also working on adding the option of using a wav file as input per Chris' suggestion. This is not fully implemented yet, but is in the works. I also recorded some sample melody lines on the cello that can be used for testing purposes.<br />
<br />
The second portion of work this week was devoted to trying to bring myself up to date on the existing literature regarding algorithmic counterpoint. Somewhat surprisingly, I did not find much at all on methods for doing this in real time. Almost all of the literature approached the problem as harmonizing with a given bass line, where the entire bass line is known in advance. Most of this research involved either a genetic algorithm or a forward working method of generating melody options, combined with some sort of scoring system to rank the possible outputs. In contrast to this area of literature, there has also been some work done on using Markov models to complete the task, which seems more applicable to the real time application that I envision.<br />
<br />
Finally, the overall project design was fleshed out with more specifics regarding what I would like to be able to implement this quarter.<br />
<br />
For next week, I hope to finish the baseline code with working pitch tracking, wav file input, and include the option to have more than one generated counterpoint voice. I would also like to use the time to decide on how I want to implement the counterpoint problem in my system.<br />
<br />
04/07/2012 - Week 1<br />
<br />
This week I decided on the general project area (algorithmic counterpoint) and laid out a general vision for what I would like the final project to look like. I think it should be a real-time performance tool that takes audio input (from my cello, for current purposes) to use as the bass line for the counterpoint. The system will then attempt to generate upper lines (hopefully two of them) in good counterpoint with the input.<br />
<br />
In considering the scope of the project, it is likely that I will need to place some constraints on the input bass line (such as requiring it to cadence on the downbeat of every 8 bars and to remain within a single key), though I don't yet know exactly what these will be.<br />
<br />
The system will be based partially on work I did in the fall quarter for music 256a developing a system that could generate a consonant counterpoint line for a given input, although it was not strict counterpoint, and did not generally result in a particularly musical output. For more information see [https://ccrma.stanford.edu/~srsmith/256a/DuetYourself DuetYourself].<br />
<br />
For next week, I plan to work on revising the relevant parts of DuetYourself to create a baseline set of code for this project. In particular, I want to add the ability to read in audio from a file instead of the live microphone input, for use in development. Related to this, I also want to record a few selections of cello playing that can be used as test inputs to the system. Finally, I want to look more into the existing work in this area (of which there seems to be a lot) to figure out how to go about the counterpoint generation.</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Srsmith&diff=13102User:Srsmith2012-06-04T18:35:46Z<p>Srsmith: /* Weekly Updates: */</p>
<hr />
<div>== Music 220C Project Page: Algorithmic Counterpoint Using Markov Models ==<br />
<br />
My project for this quarter is a real-time system for algorithmic counterpoint. The system takes an audio input (in this case cello) and generates upper voices in good counterpoint. For the final project I am hoping to generate two upper voices above the cello line in real time: one in first species and another in second species. To do this, it may be necessary to make some assumptions about the input line (e.g., tonality and length) because of the rules governing how lines must end to remain in good counterpoint. While the real-time generation of the two lines is the core of the system, I would ideally like to make it into a more interactive performance system that allows the cellist to play something other than the bass line. One possible implementation would be to assume that the bass line continues to repeat, and then let the performer add their own line or embellish the bass line as desired without interfering with the counterpoint engine.<br />
<br />
<br />
</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Srsmith&diff=13087User:Srsmith2012-05-24T02:01:11Z<p>Srsmith: /* Weekly Updates: */</p>
<hr />
<div>== Music 220C Project Page: Algorithmic Counterpoint Using Markov Models ==<br />
<br />
My project for this quarter is a real time system for algorithmic counterpoint. The system takes an audio input (in this case cello) and generates upper voices in good counterpoint. For the final project I am hoping to generate 2 upper voices above the cello line in real time: one in first species and another in second species. In order to do this, it may be necessary to make some assumptions regarding the input line (i.e. Tonal, Length of line) due to some of the rules relating to how things must end to remain in good counterpoint. While the real time generation of the two lines is the core of the system, I would ideally like to make it into a performance system that could be more interactive and allow the cellist to play something other than the bass line. One possible implementation of this would be to assume that the bass line continues to repeat and then allow the performer to add their own line/embellish the bass line as desired without interfering with the counterpoint engine.<br />
<br />
<br />
== Weekly Updates: ==<br />
05/23/2012 - Week 7<br />
<br />
Counterpoint Algorithm is finished and implemented. There are minor improvements that could still be made, but it works well. For the remainder of the quarter I plan to focus my efforts on the real time audio processing aspects of the system. <br />
<br />
This week I spent a fair amount of time testing the system with a variety of cantus firmi, running each many times. In total, there were 64 runs of the system, and it only once got to a situation that it couldn't resolve a possibility for. This particular scenario (leaping to the note two notes before the end, eliminating the possibility of getting to an appropriate penultimate note while also resolving the leap) is something that could be changed to be allowed in only these dire circumstances. While allowing the melodic motion to continue in the direction of the leap is something that is not allowed in this musical style, the current default is for the system to remain on the same note when faced with no good options - potentially resulting in a clashing sonority. The preferable action would probably be to move to a consonant pitch in anticipation of the cadence. <br />
<br />
I have also started re-incorporating the pitch tracking functions that were originally in DuetYourself. They are not fully debugged yet for use with a cello, but it should be doable. Once the pitch tracking works, I plan to use a pitch shifted version of the input to generate the output sound. <br />
<br />
05/14/2012 - Week 6.5<br />
<br />
I finally have most of the basic system up and running now. The same bugs in the synthesis still exist, but I have imported the Markov tables and can generate reasonable counterpoint with a few exceptions.<br />
<br />
Counterpoint issues to be implemented/Adjusted Fixed<br />
- Add resolution of tendency tones - this is not included in the current model and definitely needs to be<br />
- Add the cadencing structure - This table has been imported but is just not being used at the moment as other issues are being debugged<br />
- Start up: There's something weird about how I am generating the state for the first pitch. Even the first note has an assigned "previous pitch" which is used to calculate pitch #2. It would be nice to not have this necessarily, and not use information about the preceding melodic table for the first interval (this shouldn't be too hard) <br />
- The solutions at the moment are very deterministic for some reason although I cant figure out why. There are also some cases where i get an error for no possible solutions although there should be some...<br />
<br />
Synthesis/Audio Related Issues<br />
- Now that the Markov system is running, I hope to return my attention to issues of audio output and eventually input. <br />
- I am hoping to have the cello input working in time for the final presentation, but am still not sure whether this will happen or not. The issues of pitch tracking, etc have so far taken a back burner to the calculation issues. Now that things are starting to work more smoothly, this is something I want to try again. <br />
<br />
Goals for Next Week:<br />
For next week, I would like to have an output synthesis of some form that is working and letting me hear these things. Also I hope to incorporate the cadence table and tweak some of the other issues. <br />
<br />
05/04/2012 - Week 5 <br />
<br />
This week was mostly spent working with the C++ code to configure it to use input from the command line as midi pitches to calculate the line in counterpoint. I also spent time working on getting a good output sound for the system, but encountered some bugs along the way. Additionally, I was able to finish populating values into the markov tables that I am using. For next week, I hope to have these bugs fixed and more of the code done to import the Markov tables and use them to choose the next pitch. <br />
<br />
04/25/2012 - Week 4<br />
<br />
First, this week I created a bibliography of the literature that I have been using as a basis for this research. This bibliography will be updated as the project continues. Its current version can be found here (insert link). Also, due to ongoing issues with the real time interface and pitch tracking, I have created a chuck program that allows me to input numbers with the laptop keyboard that correspond to midi pitches and will output both the input and output midi numbers. This structure is the same as the function that will eventually be used for a system with real time audio input, but allows for easier debugging. <br />
<br />
This week, My work was mainly focused on developing a general layout for the Markov tables that will be used. The proposed system will now use a set of 4 different tables. The first one (table 1) implements rules regarding acceptable melodic intervals and conventions for their sequencing. As such, it takes the most recent melodic interval in the generated line and provides a probability for each of the possible following intervals. <br />
<br />
A second table implements limits on the melodic range at any given point, disallowing intervals that will result in a note outside of the melody range. This range is generally limited to Major 9th, however this range narrows towards the end off the phrases, in order to steer the melody towards the region it needs to be at for the cadence. (Table 2)<br />
<br />
Finally the renainging rules will be implemented in a three dimensional table that relates both the current harmonic interval and the upcoming melodic interval in the bass to the possible upcoming interval in the melody. This structure allows for conrol of voice leading rules such as parallel fifths and octaves, as well as preferencing things such as contrary motion and avoidance of certain other cases. The main challenge with this implementation is that this table will be quite large (approx 14 x 17 x 7 = NNN elements). In order to do this, I propose to generate this table somewhat by hand, but using the penalty values in Schottstaedt and converting these to probabilities. For the prohibitions and Major errors (penalty infinite and 200 respectively) a probability of 0 will be used. For Errors with penalty 100 a probability of 0.001 will be used. This should prevent these cases from being used unless they are the only option. For the remaining errors the penalty is converted to a probability by a function of Prob = 1 - Penalty/100. I don't know yet whether this will work well, but it seems like a reasonable starting point for generating this table. <br />
<br />
<br />
<br />
04/18/2012 - Week 2 & 3<br />
<br />
Work this week was divided between two areas. First, I spent some time reworking the code from DuetYourself to only include the functionality that I would like to expand upon for this project (the additional generated tonal line) and removed some of the other functionality from this version. I also tested the pitch tracking with a cello input (it didn't work) and started working on ways to fix this issue. I am hoping that by using a longer input buffer (needed for low pitches) and moving the calculations (currently an autocorrelation) into the frequency domain I can improve the accuracy. If this doesn't work, some additional filtering may be needed to distinguish the fundamental from the harmonics, but hopefully the longer buffer will be sufficient to still do this computation in real time. In order to facilitate testing of this system, I am also working on adding the option of using a wav file as input per Chris' suggestion. This is not fully implemented yet, but is in the works. I also recorded some sample melody lines on the cello that can be used for testing purposes.<br />
<br />
The second portion of work this week was devoted to trying to bring myself up to date on the existing literature regarding algorithmic counterpoint. Somewhat surprisingly, I did not find much at all on methods for doing this in real time. Almost all of the literature approached the problem as harmonizing with a given bass line, where the entire bass line is known in advance. Most of this research involved either a genetic algorithm or a forward working method of generating melody options, combined with some sort of scoring system to rank the possible outputs. In contrast to this area of literature, there has also been some work done on using Markov models to complete the task, which seems more applicable to the real time application that I envision.<br />
<br />
Finally, the overall project design was fleshed out with more specifics regarding what I would like to be able to implement this quarter.<br />
<br />
For next week, I hope to finish the baseline code with working pitch tracking, wav file input, and include the option to have more than one generated counterpoint voice. I would also like to use the time to decide on how I want to implement the counterpoint problem in my system.<br />
<br />
04/07/2012 - Week 1<br />
<br />
This week I decided on the general project area (algorithmic counterpoint) and laid out a general vision for what I would like the final project to look like. I think it should be a real time performance tool that takes audio input (from my cello for the current purposes) and uses it as the bass line for the counterpoint. The system will then attempt to generate upper lines (hopefully 2 of them) in good counterpoint with the input.<br />
<br />
In considering the scope of the project, it is likely that I will need to place some constraints on the input bass line (such as requiring it to cadence at the downbeat of every 8 bars and to remain within a single key), though I don't yet know exactly what these will be.<br />
<br />
The system will be based partially on work I did in the fall quarter for Music 256a, developing a system that could generate a consonant counterpoint line for a given input, although it was not strict counterpoint and did not generally result in particularly musical output. For more information see [https://ccrma.stanford.edu/~srsmith/256a/DuetYourself DuetYourself].<br />
<br />
For next week, I plan to work on revising the relevant parts of DuetYourself to create a baseline set of code for this project. In particular, I want to add the ability to read in audio from a file instead of the live microphone input, for use in development. Related to this, I also want to record a few selections of cello playing that can be used as test inputs to the system. Finally, I want to look more into the existing work in this area (of which there seems to be a lot) to figure out how to go about the counterpoint generation.</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Srsmith&diff=13062User:Srsmith2012-05-15T16:12:36Z<p>Srsmith: </p>
<hr />
<div>== Music 220C Project Page: Algorithmic Counterpoint Using Markov Models ==<br />
<br />
My project for this quarter is a real time system for algorithmic counterpoint. The system takes an audio input (in this case cello) and generates upper voices in good counterpoint. For the final project I am hoping to generate 2 upper voices above the cello line in real time: one in first species and another in second species. In order to do this, it may be necessary to make some assumptions regarding the input line (e.g., tonality, length of line) due to some of the rules relating to how things must end to remain in good counterpoint. While the real time generation of the two lines is the core of the system, I would ideally like to make it into a performance system that could be more interactive and allow the cellist to play something other than the bass line. One possible implementation of this would be to assume that the bass line continues to repeat and then allow the performer to add their own line/embellish the bass line as desired without interfering with the counterpoint engine.<br />
<br />
<br />
== Weekly Updates: ==<br />
05/14/2012 - Week 6.5<br />
<br />
I finally have most of the basic system up and running now. The same bugs in the synthesis still exist, but I have imported the Markov tables and can generate reasonable counterpoint with a few exceptions.<br />
<br />
Counterpoint issues to be implemented/adjusted/fixed:<br />
- Add resolution of tendency tones - this is not included in the current model and definitely needs to be<br />
- Add the cadencing structure - this table has been imported but is just not being used at the moment as other issues are being debugged<br />
- Start-up: There's something weird about how I am generating the state for the first pitch. Even the first note has an assigned "previous pitch" which is used to calculate pitch #2. It would be nice not to require this, and not to use the preceding melodic-interval table for the first interval (this shouldn't be too hard). <br />
- The solutions at the moment are very deterministic for some reason, although I can't figure out why. There are also some cases where I get an error for no possible solutions although there should be some...<br />
<br />
Synthesis/Audio Related Issues<br />
- Now that the Markov system is running, I hope to return my attention to issues of audio output and eventually input. <br />
- I am hoping to have the cello input working in time for the final presentation, but am still not sure whether this will happen or not. The issues of pitch tracking, etc. have so far been on the back burner behind the calculation issues. Now that things are starting to work more smoothly, this is something I want to try again. <br />
<br />
Goals for Next Week:<br />
For next week, I would like to have a working output synthesis of some form so that I can hear the results. Also I hope to incorporate the cadence table and tweak some of the other issues. <br />
<br />
05/04/2012 - Week 5 <br />
<br />
This week was mostly spent working with the C++ code to configure it to use input from the command line as MIDI pitches to calculate the line in counterpoint. I also spent time working on getting a good output sound for the system, but encountered some bugs along the way. Additionally, I was able to finish populating values into the Markov tables that I am using. For next week, I hope to have these bugs fixed and more of the code done to import the Markov tables and use them to choose the next pitch. <br />
<br />
04/25/2012 - Week 4<br />
<br />
First, this week I created a bibliography of the literature that I have been using as a basis for this research. This bibliography will be updated as the project continues. Its current version can be found here (insert link). Also, due to ongoing issues with the real time interface and pitch tracking, I have created a ChucK program that allows me to input numbers with the laptop keyboard that correspond to MIDI pitches and will output both the input and output MIDI numbers. This structure is the same as the function that will eventually be used for a system with real time audio input, but allows for easier debugging. <br />
<br />
This week, my work was mainly focused on developing a general layout for the Markov tables that will be used. The proposed system will now use a set of 4 different tables. The first one (table 1) implements rules regarding acceptable melodic intervals and conventions for their sequencing. As such, it takes the most recent melodic interval in the generated line and provides a probability for each of the possible following intervals. <br />
<br />
A second table implements limits on the melodic range at any given point, disallowing intervals that will result in a note outside of the melody range. This range is generally limited to a major 9th; however, it narrows towards the end of the phrase, in order to steer the melody towards the region it needs to be in for the cadence. (Table 2)<br />
<br />
Finally, the remaining rules will be implemented in a three-dimensional table that relates both the current harmonic interval and the upcoming melodic interval in the bass to the possible upcoming interval in the melody. This structure allows for control of voice-leading rules such as parallel fifths and octaves, as well as favoring things such as contrary motion and avoiding certain other cases. The main challenge with this implementation is that this table will be quite large (approx. 14 x 17 x 7 = 1666 elements). In order to do this, I propose to generate this table somewhat by hand, using the penalty values in Schottstaedt and converting these to probabilities. For the prohibitions and major errors (penalty infinite and 200 respectively) a probability of 0 will be used. For errors with penalty 100 a probability of 0.001 will be used; this should prevent these cases from being used unless they are the only option. For the remaining errors the penalty is converted to a probability by Prob = 1 - Penalty/100. I don't know yet whether this will work well, but it seems like a reasonable starting point for generating this table. <br />
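The penalty-to-probability scheme above can be sketched as a small helper. This is a hedged illustration: penaltyToProb is a hypothetical name, and treating any penalty of 100 or more (but below 200) like the penalty-100 "error" case is my generalization, not something specified in Schottstaedt.<br />

```cpp
#include <limits>

// Map a Schottstaedt-style penalty to a Markov transition probability:
//   infinite or >= 200 (prohibitions, major errors) -> 0
//   >= 100 (errors)                                 -> 0.001 (last resort)
//   otherwise                                       -> 1 - penalty/100
double penaltyToProb(double penalty) {
    if (penalty >= 200.0) return 0.0;
    if (penalty >= 100.0) return 0.001;
    return 1.0 - penalty / 100.0;
}
```

Note that an infinite penalty satisfies the first comparison, so prohibitions fall out of the table naturally.<br />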
<br />
<br />
<br />
04/18/2012 - Week 2 & 3<br />
<br />
Work this week was divided between two areas. First, I spent some time reworking the code from DuetYourself to only include the functionality that I would like to expand upon for this project (the additional generated tonal line) and removed some of the other functionality from this version. I also tested the pitch tracking with a cello input (it didn't work) and started working on ways to fix this issue. I am hoping that by using a longer input buffer (needed for low pitches) and moving the calculations (currently an autocorrelation) into the frequency domain I can improve the accuracy. If this doesn't work, some additional filtering may be needed to distinguish the fundamental from the harmonics, but hopefully the longer buffer will be sufficient to still do this computation in real time. In order to facilitate testing of this system, I am also working on adding the option of using a wav file as input per Chris' suggestion. This is not fully implemented yet, but is in the works. I also recorded some sample melody lines on the cello that can be used for testing purposes.<br />
<br />
The second portion of work this week was devoted to trying to bring myself up to date on the existing literature regarding algorithmic counterpoint. Somewhat surprisingly, I did not find much at all on methods for doing this in real time. Almost all of the literature approached the problem as harmonizing with a given bass line, where the entire bass line is known in advance. Most of this research involved either a genetic algorithm or a forward working method of generating melody options, combined with some sort of scoring system to rank the possible outputs. In contrast to this area of literature, there has also been some work done on using Markov models to complete the task, which seems more applicable to the real time application that I envision.<br />
<br />
Finally, the overall project design was fleshed out with more specifics regarding what I would like to be able to implement this quarter.<br />
<br />
For next week, I hope to finish the baseline code with working pitch tracking, wav file input, and the option to have more than one generated counterpoint voice. I would also like to use the time to decide how I want to implement the counterpoint problem in my system.<br />
<br />
04/07/2012 - Week 1<br />
<br />
This week I decided on the general project area (algorithmic counterpoint) and laid out a general vision for what I would like the final project to look like. I think it should be a real time performance tool that takes audio input (from my cello for the current purposes) and uses it as the bass line for the counterpoint. The system will then attempt to generate upper lines (hopefully 2 of them) in good counterpoint with the input.<br />
<br />
In considering the scope of the project, it is likely that I will need to place some constraints on the input bass line (such as requiring it to cadence at the downbeat of every 8 bars and to remain within a single key), though I don't yet know exactly what these will be.<br />
<br />
The system will be based partially on work I did in the fall quarter for Music 256a, developing a system that could generate a consonant counterpoint line for a given input, although it was not strict counterpoint and did not generally result in particularly musical output. For more information see [https://ccrma.stanford.edu/~srsmith/256a/DuetYourself DuetYourself].<br />
<br />
For next week, I plan to work on revising the relevant parts of DuetYourself to create a baseline set of code for this project. In particular, I want to add the ability to read in audio from a file instead of the live microphone input, for use in development. Related to this, I also want to record a few selections of cello playing that can be used as test inputs to the system. Finally, I want to look more into the existing work in this area (of which there seems to be a lot) to figure out how to go about the counterpoint generation.</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Srsmith&diff=13061User:Srsmith2012-05-15T15:54:04Z<p>Srsmith: /* Weekly Updates: */</p>
<hr />
<div>== Music 220C Project Page: Algorithmic Counterpoint Using Markov Models ==<br />
<br />
My project for this quarter is a real time system for algorithmic counterpoint. The system takes an audio input (in this case cello) and generates upper voices in good counterpoint. For the final project I am hoping to generate 2 upper voices above the cello line in real time: one in first species and another in second species. In order to do this, it may be necessary to make some assumptions regarding the input line (e.g., tonality, length of line) due to some of the rules relating to how things must end to remain in good counterpoint. While the real time generation of the two lines is the core of the system, I would ideally like to make it into a performance system that could be more interactive and allow the cellist to play something other than the bass line. One possible implementation of this would be to assume that the bass line continues to repeat and then allow the performer to add their own line/embellish the bass line as desired without interfering with the counterpoint engine.<br />
<br />
<br />
== Weekly Updates: ==<br />
05/14/2012 - Week 6.5<br />
<br />
I finally have most of the basic system up and running now. The same bugs in the synthesis still exist, but I have imported the Markov tables and can generate reasonable counterpoint with a few exceptions.<br />
<br />
Counterpoint issues to be implemented/adjusted/fixed:<br />
- Add resolution of tendency tones - this is not included in the current model and definitely needs to be<br />
- Add the cadencing structure - this table has been imported but is just not being used at the moment as other issues are being debugged<br />
- Start-up: There's something weird about how I am generating the state for the first pitch. Even the first note has an assigned "previous pitch" which is used to calculate pitch #2. It would be nice not to require this, and not to use the preceding melodic-interval table for the first interval (this shouldn't be too hard). <br />
- At the moment the solutions are fairly deterministic. This has to do with how the random number generator is seeded, and I'm working on it. <br />
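A seeding fix along these lines can be sketched with the C++ standard library; the names here are illustrative, not the project's actual code. The key point is to seed one engine once and reuse it, rather than constructing or re-seeding it on every call.<br />

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Sample the index of the next interval from one row of a Markov table.
// The engine is passed in so that it is seeded exactly once for the whole
// run; re-seeding per call with the same seed is a classic cause of
// "deterministic" output.
std::size_t sampleNext(const std::vector<double>& row, std::mt19937& rng) {
    std::discrete_distribution<std::size_t> dist(row.begin(), row.end());
    return dist(rng);
}
```

A caller would create the engine once, e.g. std::mt19937 rng(std::random_device{}());, and thread it through every sampleNext call.<br />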
<br />
Synthesis/Audio Related Issues<br />
- Now that the Markov system is running, I hope to return my attention to issues of audio output and eventually input. <br />
- I am hoping to have the cello input working in time for the final presentation, but am still not sure whether this will happen or not. The issues of pitch tracking, etc. have so far been on the back burner behind the calculation issues. Now that things are starting to work more smoothly, this is something I want to try again. <br />
<br />
Goals for Next Week:<br />
For next week, I would like to have a working output synthesis of some form so that I can hear the results. Also I hope to incorporate the cadence table and tweak some of the other issues. <br />
<br />
05/04/2012 - Week 5 <br />
<br />
This week was mostly spent working with the C++ code to configure it to use input from the command line as MIDI pitches to calculate the line in counterpoint. I also spent time working on getting a good output sound for the system, but encountered some bugs along the way. Additionally, I was able to finish populating values into the Markov tables that I am using. For next week, I hope to have these bugs fixed and more of the code done to import the Markov tables and use them to choose the next pitch. <br />
<br />
04/25/2012 - Week 4<br />
<br />
First, this week I created a bibliography of the literature that I have been using as a basis for this research. This bibliography will be updated as the project continues. Its current version can be found here (insert link). Also, due to ongoing issues with the real time interface and pitch tracking, I have created a ChucK program that allows me to input numbers with the laptop keyboard that correspond to MIDI pitches and will output both the input and output MIDI numbers. This structure is the same as the function that will eventually be used for a system with real time audio input, but allows for easier debugging. <br />
<br />
This week, my work was mainly focused on developing a general layout for the Markov tables that will be used. The proposed system will now use a set of 4 different tables. The first one (table 1) implements rules regarding acceptable melodic intervals and conventions for their sequencing. As such, it takes the most recent melodic interval in the generated line and provides a probability for each of the possible following intervals. <br />
<br />
A second table implements limits on the melodic range at any given point, disallowing intervals that will result in a note outside of the melody range. This range is generally limited to a major 9th; however, it narrows towards the end of the phrase, in order to steer the melody towards the region it needs to be in for the cadence. (Table 2)<br />
<br />
Finally, the remaining rules will be implemented in a three-dimensional table that relates both the current harmonic interval and the upcoming melodic interval in the bass to the possible upcoming interval in the melody. This structure allows for control of voice-leading rules such as parallel fifths and octaves, as well as favoring things such as contrary motion and avoiding certain other cases. The main challenge with this implementation is that this table will be quite large (approx. 14 x 17 x 7 = 1666 elements). In order to do this, I propose to generate this table somewhat by hand, using the penalty values in Schottstaedt and converting these to probabilities. For the prohibitions and major errors (penalty infinite and 200 respectively) a probability of 0 will be used. For errors with penalty 100 a probability of 0.001 will be used; this should prevent these cases from being used unless they are the only option. For the remaining errors the penalty is converted to a probability by Prob = 1 - Penalty/100. I don't know yet whether this will work well, but it seems like a reasonable starting point for generating this table. <br />
<br />
<br />
<br />
04/18/2012 - Week 2 & 3<br />
<br />
Work this week was divided between two areas. First, I spent some time reworking the code from DuetYourself to only include the functionality that I would like to expand upon for this project (the additional generated tonal line) and removed some of the other functionality from this version. I also tested the pitch tracking with a cello input (it didn't work) and started working on ways to fix this issue. I am hoping that by using a longer input buffer (needed for low pitches) and moving the calculations (currently an autocorrelation) into the frequency domain I can improve the accuracy. If this doesn't work, some additional filtering may be needed to distinguish the fundamental from the harmonics, but hopefully the longer buffer will be sufficient to still do this computation in real time. In order to facilitate testing of this system, I am also working on adding the option of using a wav file as input per Chris' suggestion. This is not fully implemented yet, but is in the works. I also recorded some sample melody lines on the cello that can be used for testing purposes.<br />
<br />
The second portion of work this week was devoted to trying to bring myself up to date on the existing literature regarding algorithmic counterpoint. Somewhat surprisingly, I did not find much at all on methods for doing this in real time. Almost all of the literature approached the problem as harmonizing with a given bass line, where the entire bass line is known in advance. Most of this research involved either a genetic algorithm or a forward working method of generating melody options, combined with some sort of scoring system to rank the possible outputs. In contrast to this area of literature, there has also been some work done on using Markov models to complete the task, which seems more applicable to the real time application that I envision.<br />
<br />
Finally, the overall project design was fleshed out with more specifics regarding what I would like to be able to implement this quarter.<br />
<br />
For next week, I hope to finish the baseline code with working pitch tracking, wav file input, and the option to have more than one generated counterpoint voice. I would also like to use the time to decide how I want to implement the counterpoint problem in my system.<br />
<br />
04/07/2012 - Week 1<br />
<br />
This week I decided on the general project area (algorithmic counterpoint) and laid out a general vision for what I would like the final project to look like. I think it should be a real time performance tool that takes audio input (from my cello for the current purposes) and uses it as the bass line for the counterpoint. The system will then attempt to generate upper lines (hopefully 2 of them) in good counterpoint with the input.<br />
<br />
In considering the scope of the project, it is likely that I will need to place some constraints on the input bass line (such as requiring it to cadence at the downbeat of every 8 bars and to remain within a single key), though I don't yet know exactly what these will be.<br />
<br />
The system will be based partially on work I did in the fall quarter for Music 256a, developing a system that could generate a consonant counterpoint line for a given input, although it was not strict counterpoint and did not generally result in particularly musical output. For more information see [https://ccrma.stanford.edu/~srsmith/256a/DuetYourself DuetYourself].<br />
<br />
For next week, I plan to work on revising the relevant parts of DuetYourself to create a baseline set of code for this project. In particular, I want to add the ability to read in audio from a file instead of the live microphone input, for use in development. Related to this, I also want to record a few selections of cello playing that can be used as test inputs to the system. Finally, I want to look more into the existing work in this area (of which there seems to be a lot) to figure out how to go about the counterpoint generation.</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Srsmith&diff=13060User:Srsmith2012-05-15T09:54:51Z<p>Srsmith: /* Weekly Updates: */</p>
<hr />
<div>== Music 220C Project Page: Algorithmic Counterpoint Using Markov Models ==<br />
<br />
My project for this quarter is a real time system for algorithmic counterpoint. The system takes an audio input (in this case cello) and generates upper voices in good counterpoint. For the final project I am hoping to generate 2 upper voices above the cello line in real time: one in first species and another in second species. In order to do this, it may be necessary to make some assumptions regarding the input line (e.g., tonality, length of line) due to some of the rules relating to how things must end to remain in good counterpoint. While the real time generation of the two lines is the core of the system, I would ideally like to make it into a performance system that could be more interactive and allow the cellist to play something other than the bass line. One possible implementation of this would be to assume that the bass line continues to repeat and then allow the performer to add their own line/embellish the bass line as desired without interfering with the counterpoint engine.<br />
<br />
<br />
== Weekly Updates: ==<br />
05/14/2012 - Week 6.5<br />
<br />
I finally have most of the basic system up and running now. The same bugs in the synthesis still exist, but I have imported the Markov tables and can generate reasonable counterpoint with a few exceptions.<br />
<br />
Counterpoint issues to be implemented/adjusted/fixed:<br />
- Add resolution of tendency tones - this is not included in the current model and definitely needs to be<br />
- Add the cadencing structure - this table has been imported but is just not being used at the moment as other issues are being debugged<br />
- Start-up: There's something weird about how I am generating the state for the first pitch. Even the first note has an assigned "previous pitch" which is used to calculate pitch #2. It would be nice not to require this, and not to use the preceding melodic-interval table for the first interval (this shouldn't be too hard). <br />
- The solutions at the moment are very deterministic for some reason, although I can't figure out why. There are also some cases where I get an error for no possible solutions although there should be some...<br />
<br />
Synthesis/Audio Related Issues<br />
- Now that the Markov system is running, I hope to return my attention to issues of audio output and eventually input. <br />
- I am hoping to have the cello input working in time for the final presentation, but am still not sure whether this will happen or not. The issues of pitch tracking, etc. have so far been on the back burner behind the calculation issues. Now that things are starting to work more smoothly, this is something I want to try again. <br />
<br />
Goals for Next Week:<br />
For next week, I would like to have a working output synthesis of some form so that I can hear the results. Also I hope to incorporate the cadence table and tweak some of the other issues. <br />
<br />
05/04/2012 - Week 5 <br />
<br />
This week was mostly spent working with the C++ code to configure it to use input from the command line as MIDI pitches to calculate the line in counterpoint. I also spent time working on getting a good output sound for the system, but encountered some bugs along the way. Additionally, I was able to finish populating values into the Markov tables that I am using. For next week, I hope to have these bugs fixed and more of the code done to import the Markov tables and use them to choose the next pitch. <br />
<br />
04/25/2012 - Week 4<br />
<br />
First, this week I created a bibliography of the literature that I have been using as a basis for this research. This bibliography will be updated as the project continues. Its current version can be found here (insert link). Also, due to ongoing issues with the real time interface and pitch tracking, I have created a ChucK program that allows me to input numbers with the laptop keyboard that correspond to MIDI pitches and will output both the input and output MIDI numbers. This structure is the same as the function that will eventually be used for a system with real time audio input, but allows for easier debugging. <br />
<br />
This week, my work was mainly focused on developing a general layout for the Markov tables that will be used. The proposed system will now use a set of 4 different tables. The first one (table 1) implements rules regarding acceptable melodic intervals and conventions for their sequencing. As such, it takes the most recent melodic interval in the generated line and provides a probability for each of the possible following intervals. <br />
<br />
A second table implements limits on the melodic range at any given point, disallowing intervals that will result in a note outside of the melody range. This range is generally limited to a major 9th; however, it narrows towards the end of the phrase, in order to steer the melody towards the region it needs to be in for the cadence. (Table 2)<br />
<br />
Finally, the remaining rules will be implemented in a three-dimensional table that relates both the current harmonic interval and the upcoming melodic interval in the bass to the possible upcoming interval in the melody. This structure allows for control of voice-leading rules such as parallel fifths and octaves, as well as favoring things such as contrary motion and avoiding certain other cases. The main challenge with this implementation is that this table will be quite large (approx. 14 x 17 x 7 = 1666 elements). In order to do this, I propose to generate this table somewhat by hand, using the penalty values in Schottstaedt and converting these to probabilities. For the prohibitions and major errors (penalty infinite and 200 respectively) a probability of 0 will be used. For errors with penalty 100 a probability of 0.001 will be used; this should prevent these cases from being used unless they are the only option. For the remaining errors the penalty is converted to a probability by Prob = 1 - Penalty/100. I don't know yet whether this will work well, but it seems like a reasonable starting point for generating this table. <br />
<br />
<br />
<br />
04/18/2012 - Week 2 & 3<br />
<br />
Work this week was divided between two areas. First, I spent some time reworking the code from DuetYourself to only include the functionality that I would like to expand upon for this project (the additional generated tonal line) and removed some of the other functionality from this version. I also tested the pitch tracking with a cello input (it didn't work) and started working on ways to fix this issue. I am hoping that by using a longer input buffer (needed for low pitches) and moving the calculations (currently an autocorrelation) into the frequency domain I can improve the accuracy. If this doesn't work, some additional filtering may be needed to distinguish the fundamental from the harmonics, but hopefully the longer buffer will be sufficient to still do this computation in real time. In order to facilitate testing of this system, I am also working on adding the option of using a wav file as input per Chris' suggestion. This is not fully implemented yet, but is in the works. I also recorded some sample melody lines on the cello that can be used for testing purposes.<br />
<br />
The second portion of work this week was devoted to trying to bring myself up to date on the existing literature regarding algorithmic counterpoint. Somewhat surprisingly, I did not find much at all on methods for doing this in real time. Almost all of the literature approached the problem as harmonizing with a given bass line, where the entire bass line is known in advance. Most of this research involved either a genetic algorithm or a forward working method of generating melody options, combined with some sort of scoring system to rank the possible outputs. In contrast to this area of literature, there has also been some work done on using Markov models to complete the task, which seems more applicable to the real time application that I envision.<br />
<br />
Finally, the overall project design was fleshed out with more specifics regarding what I would like to be able to implement this quarter.<br />
<br />
For next week, I hope to finish the baseline code with working pitch tracking, wav file input, and the option to have more than one generated counterpoint voice. I would also like to use the time to decide how I want to implement the counterpoint problem in my system.<br />
<br />
04/07/2012 - Week 1<br />
<br />
This week I decided on the general project area (algorithmic counterpoint) and laid out a general vision for what I would like the final project to look like. I think it should be a real time performance tool that takes audio input (from my cello for the current purposes) and uses it as the bass line for the counterpoint. The system will then attempt to generate upper lines (hopefully 2 of them) in good counterpoint with the input.<br />
<br />
In considering the scope of the project, it is likely that I will need to place some constraints on the input bass line (such as requiring it to cadence at the downbeat of every 8 bars and to remain within a single key), though I don't yet know exactly what these will be.<br />
<br />
The system will be based partially on work I did in the fall quarter for music 256a developing a system that could generate a consonant counterpoint line for a given input, although it was not strict counterpoint, and did not generally result in a particularly musical output. For more information see [https://ccrma.stanford.edu/~srsmith/256a/DuetYourself DuetYourself].<br />
<br />
For next week, I plan to work on revising the relevant parts of DuetYourself to create a baseline set of code for this project. In particular, I want to add the ability to read in audio from a file instead of the live microphone input, for use in development. Related to this, I also want to record a few selections of cello playing that can be used as test inputs to the system. Finally, I want to look more into the existing work in this area (of which there seems to be a lot) to figure out how to go about the counterpoint generation.</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=220c-spring-2012&diff=13048220c-spring-20122012-05-07T15:45:11Z<p>Srsmith: /* Use the Space Below to Link to Your Project Pages/Wikis */</p>
<hr />
<div>[[Category:Courses]]<br />
<br />
= [https://ccrma.stanford.edu/courses/220c/ <b>Music 220c</b>] - Research Seminar in Computer-Generated Music =<br />
<br />
<br />
==Use the Space Below to Link to Your Project Pages/Wikis==<br />
Short blurbs and links to project pages:<br />
<br />
(Edit this page and change the link from https://ccrma.stanford.edu/ to your course website.)<br />
<br />
*[https://ccrma.stanford.edu/ Alex]<br />
*[https://ccrma.stanford.edu/ Chris]<br />
*[https://ccrma.stanford.edu/wiki/User:Cecilia Cecilia]<br />
*[https://ccrma.stanford.edu/ Colin]<br />
*[https://ccrma.stanford.edu/~dt/220c/ Derek]<br />
*[https://ccrma.stanford.edu/ Eri]<br />
*[https://ccrma.stanford.edu/wiki/User:Francois/Solar_Genesis_II Francois]<br />
*[https://ccrma.stanford.edu/~jhsu/220c/ Jen]<br />
*[https://ccrma.stanford.edu/wiki/User:Jiffer8/220c Jiffer]<br />
*[https://ccrma.stanford.edu/wiki/User:Jrowell/220C Jeff]<br />
*[https://ccrma.stanford.edu/wiki/User:Kwerner/220C Kurt]<br />
*[https://ccrma.stanford.edu/ Locky]<br />
*[https://ccrma.stanford.edu/~lzodda/220c/fp/ Lydia]<br />
*[http://apolloforteens.tumblr.com/ Ricky]<br />
*[https://ccrma.stanford.edu/wiki/User:Srsmith Sarah]<br />
<br />
<br />
----<br />
Email [mailto:cc@ccrma.stanford.edu Chris] ~ <br />
Email [mailto:spencer@ccrma.stanford.edu Spencer]</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Srsmith&diff=13047User:Srsmith2012-05-07T15:25:05Z<p>Srsmith: /* Weekly Updates: */</p>
<hr />
<div>== Music 220C Project Page: Algorithmic Counterpoint Using Markov Models ==<br />
<br />
My project for this quarter is a real time system for algorithmic counterpoint. The system takes an audio input (in this case cello) and generates upper voices in good counterpoint. For the final project I am hoping to generate 2 upper voices above the cello line in real time: one in first species and another in second species. In order to do this, it may be necessary to make some assumptions regarding the input line (e.g., tonality and length of line) due to some of the rules relating to how things must end to remain in good counterpoint. While the real time generation of the two lines is the core of the system, I would ideally like to make it into a performance system that could be more interactive and allow the cellist to play something other than the bass line. One possible implementation of this would be to assume that the bass line continues to repeat and then allow the performer to add their own line/embellish the bass line as desired without interfering with the counterpoint engine.<br />
<br />
<br />
== Weekly Updates: ==<br />
<br />
05/04/2012 - Week 5 <br />
<br />
This week was mostly spent working with the C++ code, configuring it to accept MIDI pitches as command-line input for calculating the counterpoint line. I also spent time working on getting a good output sound for the system, but encountered some bugs along the way. Additionally, I was able to finish populating the values in the Markov tables that I am using. For next week, I hope to have these bugs fixed and more of the code written to import the Markov tables and use them to choose the next pitch. <br />
<br />
04/25/2012 - Week 4<br />
<br />
First, this week I created a bibliography of the literature that I have been using as a basis for this research. This bibliography will be updated as the project continues. Its current version can be found here (insert link). Also, due to ongoing issues with the real time interface and pitch tracking, I have created a ChucK program that lets me enter numbers on the laptop keyboard corresponding to MIDI pitches, and that outputs both the input and output MIDI numbers. This structure is the same as the function that will eventually be used in a system with real time audio input, but allows for easier debugging. <br />
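A minimal stand-in for this keyboard-driven debugging harness could look like the following Python sketch (the original is a ChucK program; this only mirrors its input/output structure). The +16 semitone (major tenth) mapping is a hypothetical placeholder, not the project's actual counterpoint rule.

```python
def harmonize(midi_in):
    """Placeholder rule: answer a major tenth (16 semitones) above the input.

    Hypothetical stand-in for the real Markov-table lookup.
    """
    return midi_in + 16

def run(lines):
    """Read typed tokens, keep the valid MIDI numbers, and pair each
    input pitch with its generated output pitch (as the harness prints)."""
    out = []
    for line in lines:
        line = line.strip()
        if not line.isdigit():   # ignore stray keystrokes
            continue
        m = int(line)
        out.append((m, harmonize(m)))
    return out

# Simulated keyboard session: C3, D3, a stray key, E3.
print(run(["48", "50", "x", "52"]))  # prints [(48, 64), (50, 66), (52, 68)]
```

Replacing the keyboard source with a pitch tracker (or wav-file reader) would leave the rest of this structure unchanged, which is the point of the harness.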
<br />
This week, my work was mainly focused on developing a general layout for the Markov tables that will be used. The proposed system will now use a set of 4 different tables. The first one (table 1) implements rules regarding acceptable melodic intervals and conventions for their sequencing. As such, it takes the most recent melodic interval in the generated line and provides a probability for each of the possible following intervals. <br />
<br />
A second table implements limits on the melodic range at any given point, disallowing intervals that would result in a note outside of the melody range. This range is generally limited to a major ninth; however, it narrows towards the end of the phrase in order to steer the melody towards the region it needs to reach for the cadence. (Table 2)<br />
<br />
Finally, the remaining rules will be implemented in a three-dimensional table that relates both the current harmonic interval and the upcoming melodic interval in the bass to the possible upcoming interval in the melody. This structure allows for control of voice-leading rules such as parallel fifths and octaves, as well as favoring features such as contrary motion and avoiding certain other cases. The main challenge with this implementation is that the table will be quite large (approximately 14 x 17 x 7 = 1,666 elements). To populate it, I propose to generate the table somewhat by hand, using the penalty values in Schottstaedt and converting them to probabilities. For the prohibitions and major errors (penalty infinite and 200, respectively) a probability of 0 will be used. For errors with penalty 100, a probability of 0.001 will be used; this should prevent these cases from being chosen unless they are the only option. For the remaining errors, the penalty is converted to a probability via Prob = 1 - Penalty/100. I don't know yet whether this will work well, but it seems like a reasonable starting point for generating the table. <br />
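The penalty-to-probability scheme above can be sketched as follows. The specific intervals and penalty values in this Python sketch are hypothetical stand-ins, not values taken from Schottstaedt's tables; only the conversion rule (0 for prohibitions and major errors, 0.001 for errors at penalty 100, 1 - Penalty/100 otherwise) follows the scheme described.

```python
import random

# Hypothetical penalties for a handful of candidate melodic intervals
# (in semitones): inf marks a prohibition, 200 a major error, 100 an
# error, and smaller values mild faults. Illustrative values only.
penalties = {2: 0.0, -2: 10.0, 4: 30.0, -5: 100.0, 7: 200.0, 12: float("inf")}

def penalty_to_prob(p):
    """Convert a Schottstaedt-style penalty to an (unnormalized) weight."""
    if p == float("inf") or p >= 200:   # prohibitions and major errors
        return 0.0
    if p >= 100:                        # plain errors: nearly never chosen
        return 0.001
    return 1.0 - p / 100.0              # remaining errors: linear falloff

def sample_interval(penalties, rng=random):
    """Draw the next melodic interval with probability proportional to weight."""
    weights = {iv: penalty_to_prob(p) for iv, p in penalties.items()}
    r = rng.random() * sum(weights.values())
    for iv, w in weights.items():
        r -= w
        if r <= 0:
            return iv
    return iv  # guard against floating-point edge cases

random.seed(0)
counts = {iv: 0 for iv in penalties}
for _ in range(10000):
    counts[sample_interval(penalties)] += 1
print(counts[12], counts[7])  # prints "0 0": prohibited/major-error intervals never occur
```

In the full system, the weights drawn from tables 1-3 would be multiplied together before sampling, so a zero in any table vetoes that interval outright.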
<br />
<br />
<br />
04/18/2012 - Week 2 & 3<br />
<br />
Work this week was divided between two areas. First, I spent some time reworking the code from DuetYourself to only include the functionality that I would like to expand upon for this project (the additional generated tonal line) and removed some of the other functionality from this version. I also tested the pitch tracking with a cello input (it didn't work) and started working on ways to fix this issue. I am hoping that by using a longer input buffer (needed for low pitches) and moving the calculations (currently an autocorrelation) into the frequency domain I can improve the accuracy. If this doesn't work, some additional filtering may be needed to distinguish the fundamental from the harmonics, but hopefully the longer buffer will be sufficient to still do this computation in real time. In order to facilitate testing of this system, I am also working on adding the option of using a wav file as input per Chris' suggestion. This is not fully implemented yet, but is in the works. I also recorded some sample melody lines on the cello that can be used for testing purposes.<br />
<br />
The second portion of work this week was devoted to trying to bring myself up to date on the existing literature regarding algorithmic counterpoint. Somewhat surprisingly, I did not find much at all on methods for doing this in real time. Almost all of the literature approached the problem as harmonizing with a given bass line, where the entire bass line is known in advance. Most of this research involved either a genetic algorithm or a forward working method of generating melody options, combined with some sort of scoring system to rank the possible outputs. In contrast to this area of literature, there has also been some work done on using Markov models to complete the task, which seems more applicable to the real time application that I envision.<br />
<br />
Finally, the overall project design was fleshed out with more specifics regarding what I would like to be able to implement this quarter.<br />
<br />
For next week, I hope to finish the baseline code with working pitch tracking, wav file input, and include the option to have more than one generated counterpoint voice. I would also like to use the time to decide on how I want to implement the counterpoint problem in my system.<br />
<br />
04/07/2012 - Week 1<br />
<br />
This week I decided on the general project area (algorithmic counterpoint) and laid out a general vision for what I would like the final project to look like. I think it should be a real time performance tool that will take audio input (from my cello, for current purposes) to be used as the bass line for the counterpoint. The system will then attempt to generate upper lines (hopefully 2 of them) in good counterpoint with the input.<br />
<br />
In considering the scope of the project, it is likely that I will need to place some constraints on the input bass line (such as requiring it to cadence at the downbeat of every 8 bars and to remain within a single key), though I don't yet know exactly what these will be.<br />
<br />
The system will be based partially on work I did in the fall quarter for music 256a developing a system that could generate a consonant counterpoint line for a given input, although it was not strict counterpoint, and did not generally result in a particularly musical output. For more information see [https://ccrma.stanford.edu/~srsmith/256a/DuetYourself DuetYourself].<br />
<br />
For next week, I plan to work on revising the relevant parts of DuetYourself to create a baseline set of code for this project. In particular, I want to add the ability to read in audio from a file instead of the live microphone input, for use in development. Related to this, I also want to record a few selections of cello playing that can be used as test inputs to the system. Finally, I want to look more into the existing work in this area (of which there seems to be a lot) to figure out how to go about the counterpoint generation.</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Srsmith&diff=12995User:Srsmith2012-04-25T23:54:49Z<p>Srsmith: </p>
<hr />
<div>== Music 220C Project Page: Algorithmic Counterpoint Using Markov Models ==<br />
<br />
My project for this quarter is a real time system for algorithmic counterpoint. The system takes an audio input (in this case cello) and generates upper voices in good counterpoint. For the final project I am hoping to generate 2 upper voices above the cello line in real time: one in first species and another in second species. In order to do this, it may be necessary to make some assumptions regarding the input line (e.g., tonality and length of line) due to some of the rules relating to how things must end to remain in good counterpoint. While the real time generation of the two lines is the core of the system, I would ideally like to make it into a performance system that could be more interactive and allow the cellist to play something other than the bass line. One possible implementation of this would be to assume that the bass line continues to repeat and then allow the performer to add their own line/embellish the bass line as desired without interfering with the counterpoint engine.<br />
<br />
<br />
== Weekly Updates: ==<br />
<br />
04/25/2012 - Week 4 Draft<br />
<br />
First, this week I created a bibliography of the literature that I have been using as a basis for this research. This bibliography will be updated as the project continues. Its current version can be found here (insert link). Also, due to ongoing issues with the real time interface and pitch tracking, I have created a ChucK program that lets me enter numbers on the laptop keyboard corresponding to MIDI pitches, and that outputs both the input and output MIDI numbers. This structure is the same as the function that will eventually be used in a system with real time audio input, but allows for easier debugging. <br />
<br />
This week, my work was mainly focused on developing a general layout for the Markov tables that will be used. The proposed system will now use a set of 4 different tables. The first one (table 1) implements rules regarding acceptable melodic intervals and conventions for their sequencing. As such, it takes the most recent melodic interval in the generated line and provides a probability for each of the possible following intervals. <br />
<br />
A second table implements limits on the melodic range at any given point, disallowing intervals that would result in a note outside of the melody range. This range is generally limited to a major ninth; however, it narrows towards the end of the phrase in order to steer the melody towards the region it needs to reach for the cadence. (Table 2)<br />
<br />
Finally, the remaining rules will be implemented in a three-dimensional table that relates both the current harmonic interval and the upcoming melodic interval in the bass to the possible upcoming interval in the melody. This structure allows for control of voice-leading rules such as parallel fifths and octaves, as well as favoring features such as contrary motion and avoiding certain other cases. The main challenge with this implementation is that the table will be quite large (approximately 14 x 17 x 7 = 1,666 elements). To populate it, I propose to generate the table somewhat by hand, using the penalty values in Schottstaedt and converting them to probabilities. For the prohibitions and major errors (penalty infinite and 200, respectively) a probability of 0 will be used. For errors with penalty 100, a probability of 0.001 will be used; this should prevent these cases from being chosen unless they are the only option. For the remaining errors, the penalty is converted to a probability via Prob = 1 - Penalty/100. I don't know yet whether this will work well, but it seems like a reasonable starting point for generating the table. <br />
<br />
<br />
<br />
04/18/2012 - Week 2 & 3<br />
<br />
Work this week was divided between two areas. First, I spent some time reworking the code from DuetYourself to only include the functionality that I would like to expand upon for this project (the additional generated tonal line) and removed some of the other functionality from this version. I also tested the pitch tracking with a cello input (it didn't work) and started working on ways to fix this issue. I am hoping that by using a longer input buffer (needed for low pitches) and moving the calculations (currently an autocorrelation) into the frequency domain I can improve the accuracy. If this doesn't work, some additional filtering may be needed to distinguish the fundamental from the harmonics, but hopefully the longer buffer will be sufficient to still do this computation in real time. In order to facilitate testing of this system, I am also working on adding the option of using a wav file as input per Chris' suggestion. This is not fully implemented yet, but is in the works. I also recorded some sample melody lines on the cello that can be used for testing purposes.<br />
<br />
The second portion of work this week was devoted to trying to bring myself up to date on the existing literature regarding algorithmic counterpoint. Somewhat surprisingly, I did not find much at all on methods for doing this in real time. Almost all of the literature approached the problem as harmonizing with a given bass line, where the entire bass line is known in advance. Most of this research involved either a genetic algorithm or a forward working method of generating melody options, combined with some sort of scoring system to rank the possible outputs. In contrast to this area of literature, there has also been some work done on using Markov models to complete the task, which seems more applicable to the real time application that I envision.<br />
<br />
Finally, the overall project design was fleshed out with more specifics regarding what I would like to be able to implement this quarter.<br />
<br />
For next week, I hope to finish the baseline code with working pitch tracking, wav file input, and include the option to have more than one generated counterpoint voice. I would also like to use the time to decide on how I want to implement the counterpoint problem in my system.<br />
<br />
04/07/2012 - Week 1<br />
<br />
This week I decided on the general project area (algorithmic counterpoint) and laid out a general vision for what I would like the final project to look like. I think it should be a real time performance tool that will take audio input (from my cello, for current purposes) to be used as the bass line for the counterpoint. The system will then attempt to generate upper lines (hopefully 2 of them) in good counterpoint with the input.<br />
<br />
In considering the scope of the project, it is likely that I will need to place some constraints on the input bass line (such as requiring it to cadence at the downbeat of every 8 bars and to remain within a single key), though I don't yet know exactly what these will be.<br />
<br />
The system will be based partially on work I did in the fall quarter for music 256a developing a system that could generate a consonant counterpoint line for a given input, although it was not strict counterpoint, and did not generally result in a particularly musical output. For more information see [https://ccrma.stanford.edu/~srsmith/256a/DuetYourself DuetYourself].<br />
<br />
For next week, I plan to work on revising the relevant parts of DuetYourself to create a baseline set of code for this project. In particular, I want to add the ability to read in audio from a file instead of the live microphone input, for use in development. Related to this, I also want to record a few selections of cello playing that can be used as test inputs to the system. Finally, I want to look more into the existing work in this area (of which there seems to be a lot) to figure out how to go about the counterpoint generation.</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Srsmith&diff=12993User:Srsmith2012-04-24T21:28:12Z<p>Srsmith: Created page with ' == Music 220C Project Page: Algorithmic Counterpoint Using Markov Models == My project for this quarter is a real time system for algorithmic counterpoint. The system takes an …'</p>
<hr />
<div><br />
== Music 220C Project Page: Algorithmic Counterpoint Using Markov Models ==<br />
<br />
My project for this quarter is a real time system for algorithmic counterpoint. The system takes an audio input (in this case cello) and generates upper voices in good counterpoint. For the final project I am hoping to generate 2 upper voices above the cello line in real time: one in first species and another in second species. In order to do this, it may be necessary to make some assumptions regarding the input line (e.g., tonality and length of line) due to some of the rules relating to how things must end to remain in good counterpoint. While the real time generation of the two lines is the core of the system, I would ideally like to make it into a performance system that could be more interactive and allow the cellist to play something other than the bass line. One possible implementation of this would be to assume that the bass line continues to repeat and then allow the performer to add their own line/embellish the bass line as desired without interfering with the counterpoint engine.<br />
<br />
<br />
== Weekly Updates: ==<br />
<br />
04/18/2012 - Week 2 & 3<br />
<br />
Work this week was divided between two areas. First, I spent some time reworking the code from DuetYourself to only include the functionality that I would like to expand upon for this project (the additional generated tonal line) and removed some of the other functionality from this version. I also tested the pitch tracking with a cello input (it didn't work) and started working on ways to fix this issue. I am hoping that by using a longer input buffer (needed for low pitches) and moving the calculations (currently an autocorrelation) into the frequency domain I can improve the accuracy. If this doesn't work, some additional filtering may be needed to distinguish the fundamental from the harmonics, but hopefully the longer buffer will be sufficient to still do this computation in real time. In order to facilitate testing of this system, I am also working on adding the option of using a wav file as input per Chris' suggestion. This is not fully implemented yet, but is in the works. I also recorded some sample melody lines on the cello that can be used for testing purposes.<br />
<br />
The second portion of work this week was devoted to trying to bring myself up to date on the existing literature regarding algorithmic counterpoint. Somewhat surprisingly, I did not find much at all on methods for doing this in real time. Almost all of the literature approached the problem as harmonizing with a given bass line, where the entire bass line is known in advance. Most of this research involved either a genetic algorithm or a forward working method of generating melody options, combined with some sort of scoring system to rank the possible outputs. In contrast to this area of literature, there has also been some work done on using Markov models to complete the task, which seems more applicable to the real time application that I envision.<br />
<br />
Finally, the overall project design was fleshed out with more specifics regarding what I would like to be able to implement this quarter.<br />
<br />
For next week, I hope to finish the baseline code with working pitch tracking, wav file input, and include the option to have more than one generated counterpoint voice. I would also like to use the time to decide on how I want to implement the counterpoint problem in my system.<br />
<br />
04/07/2012 - Week 1<br />
<br />
This week I decided on the general project area (algorithmic counterpoint) and laid out a general vision for what I would like the final project to look like. I think it should be a real time performance tool that will take audio input (from my cello, for current purposes) to be used as the bass line for the counterpoint. The system will then attempt to generate upper lines (hopefully 2 of them) in good counterpoint with the input.<br />
<br />
In considering the scope of the project, it is likely that I will need to place some constraints on the input bass line (such as requiring it to cadence at the downbeat of every 8 bars and to remain within a single key), though I don't yet know exactly what these will be.<br />
<br />
The system will be based partially on work I did in the fall quarter for music 256a developing a system that could generate a consonant counterpoint line for a given input, although it was not strict counterpoint, and did not generally result in a particularly musical output. For more information see [https://ccrma.stanford.edu/~srsmith/256a/DuetYourself DuetYourself].<br />
<br />
For next week, I plan to work on revising the relevant parts of DuetYourself to create a baseline set of code for this project. In particular, I want to add the ability to read in audio from a file instead of the live microphone input, for use in development. Related to this, I also want to record a few selections of cello playing that can be used as test inputs to the system. Finally, I want to look more into the existing work in this area (of which there seems to be a lot) to figure out how to go about the counterpoint generation.</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=220c-spring-2012&diff=12903220c-spring-20122012-04-10T22:01:43Z<p>Srsmith: </p>
<hr />
<div>[[Category:Courses]]<br />
<br />
= [https://ccrma.stanford.edu/courses/220c/ <b>Music 220c</b>] - Research Seminar in Computer-Generated Music =<br />
<br />
<br />
==Use the Space Below to Link to Your Project Pages/Wikis==<br />
Short blurbs and links to project pages:<br />
<br />
(Edit this page and change the link from https://ccrma.stanford.edu/ to your course website.)<br />
<br />
*[https://ccrma.stanford.edu/ Alex]<br />
*[https://ccrma.stanford.edu/ Chris]<br />
*[https://ccrma.stanford.edu/ Cecilia]<br />
*[https://ccrma.stanford.edu/ Colin]<br />
*[https://ccrma.stanford.edu/~dt/220c/ Derek]<br />
*[https://ccrma.stanford.edu/ Eri]<br />
*[https://ccrma.stanford.edu/wiki/User:Francois/Solar_Genesis_II Francois]<br />
*[https://ccrma.stanford.edu/ Jen]<br />
*[https://ccrma.stanford.edu/ Jiffer]<br />
*[https://ccrma.stanford.edu/ Jeff]<br />
*[https://ccrma.stanford.edu/ Kurt]<br />
*[https://ccrma.stanford.edu/ Locky]<br />
*[https://ccrma.stanford.edu/ Lydia]<br />
*[https://apolloforteens.tumblr.com/ Ricky]<br />
*[https://ccrma.stanford.edu/~srsmith/220c/ Sarah]<br />
<br />
<br />
----<br />
Email [mailto:cc@ccrma.stanford.edu Chris] ~ <br />
Email [mailto:spencer@ccrma.stanford.edu Spencer]</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=220b-winter-2012/final-rehearsal-schedule&diff=12861220b-winter-2012/final-rehearsal-schedule2012-03-15T19:25:39Z<p>Srsmith: </p>
<hr />
<div>= Final Project Presentation Rehearsal Schedule =<br />
<br />
Location: CCRMA Stage, 3rd floor, The Knoll<br />
<br />
Date/Time: Monday, March 19, 10am-3pm<br />
<br />
== Time Slots ==<br />
<br />
Please indicate your name, number of audio channels, whether or not you need video, and any other technical needs. <br />
<br />
* 9:00 - (name, number of channels, video/no video)<br />
* 9:20 - (name, number of channels, video/no video)<br />
* 9:40 - (Cecilia, 2 or 8, video/no video(not sure yet))<br />
* 10:00 - (name, number of channels, video/no video)<br />
* 10:20 - Sarah, 4 , no video, mic input.<br />
* 10:40 - Kevin, 2, video<br />
* 10:00 - Danny, 2, video<br />
* 10:20 - Kurt, 2 ch, video<br />
* 10:40 - Francois, 8, video, DK<br />
* 11:00 - Micah, 2, video<br />
* 11:20 - Jennifer, 2, no video (for now..)<br />
* 11:40 - Lydia, 2, video<br />
* 12:00 - JP, 2, No video<br />
* 12:20 - Jeff, 2, no video, mic for input<br />
* 12:40 - Evan, 2, possibly video<br />
* 1:00 - Chris, Listening Room, no video<br />
* 1:20 - Jiffer, 2-chan, no vid<br />
* 1:40 - Eri, 2, no vid, mic<br />
* 2:00 - (Colin Sullivan, 2, no video), Max Praglin (2, with mic, no video)<br />
* 2:20 - Alex Stabile, 2, video<br />
* 2:40 - Timothy Wong, 2, no video</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=220b-winter-2012/final-rehearsal-schedule&diff=12846220b-winter-2012/final-rehearsal-schedule2012-03-12T23:59:34Z<p>Srsmith: </p>
<hr />
<div>= Final Project Presentation Rehearsal Schedule =<br />
<br />
Location: CCRMA Stage, 3rd floor, The Knoll<br />
<br />
Date/Time: Monday, March 18, 10am-3pm<br />
<br />
== Time Slots ==<br />
<br />
Please indicate your name, number of audio channels, whether or not you need video, and any other technical needs. <br />
<br />
* 9:00 - Francois, 8, video, DK<br />
* 9:20 - (name, number of channels, video/no video)<br />
* 9:40 - (Cecilia, 2 or 8, video/no video(not sure yet))<br />
* 10:00 - (, number of channels, video/no video)<br />
* 10:20 - Sarah, 4 , no video<br />
* 10:40 - (name, number of channels, video/no video)<br />
* 10:00 - (name, number of channels, video/no video)<br />
* 10:20 - (name, number of channels, video/no video)<br />
* 10:40 - (name, number of channels, video/no video)<br />
* 11:00 - Micah, 2, video<br />
* 11:20 - Jennifer, 2, no video (for now..)<br />
* 11:40 - (name, number of channels, video/no video)<br />
* 12:00 - JP, 2, No video<br />
* 12:20 - Jeff, 2, no video, mic for input<br />
* 12:40 - (name, number of channels, video/no video)<br />
* 1:00 - (name, number of channels, video/no video)<br />
* 1:20 - Jiffer, 2-chan, no vid<br />
* 1:40 - (name, number of channels, video/no video)<br />
* 2:00 - (name, number of channels, video/no video)<br />
* 2:20 - (name, number of channels, video/no video)<br />
* 2:40 - (name, number of channels, video/no video)</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=220a-fall-2011&diff=12755220a-fall-20112011-11-29T20:06:39Z<p>Srsmith: </p>
<hr />
<div>= Final Presentation =<br />
<br />
* Web page due and presentation on Thursday, December 15, at 3:30pm<br />
<br />
<br />
Final projects in 220a are wide open in content. New music, ideas involving sound, upgrades to ChucK itself, graphics interaction (Processing language), anything goes. The only requirement is to do the sound and music programming in ChucK so your code is shareable with the rest of the class. You'll have 8 minutes to set up, play, and present your work at our final meeting. It'll be a long evening and Chris will cook. These are usually pretty fun marathons and guests are welcome.<br />
<br />
We'll be up in the Stage from 3:30pm and there is a signup wiki for presentation order. Anyone with questions about a project idea or how to make one work can contact Chris or the TA's.<br />
<br />
<br />
<br />
== Final Presentation Sign Up ==<br />
<br />
1. Abrey Mann<br />
<br />
2. Max Praglin<br />
<br />
3. Timothy Wong<br />
<br />
4. Sidharth Kumar<br />
<br />
5. Kurt Werner<br />
<br />
6. Danny Organ<br />
<br />
7. Mayank Sanganeria<br />
<br />
8. Francois Germain<br />
<br />
9. Cecilia Wu<br />
<br />
10. Melvin Low<br />
<br />
11. JP Wright<br />
<br />
12. Kevin Chau<br />
<br />
13. Sarah Smith<br />
<br />
14.<br />
<br />
15.<br />
<br />
16.<br />
<br />
17.<br />
<br />
18.<br />
<br />
19.<br />
<br />
20.<br />
<br />
21. <br />
<br />
22.<br />
<br />
23.<br />
<br />
24.<br />
<br />
25.<br />
<br />
26.<br />
<br />
27.<br />
<br />
28.<br />
<br />
29.<br />
<br />
30.<br />
<br />
31.<br />
<br />
32.<br />
<br />
33.<br />
<br />
34.<br />
<br />
35.<br />
<br />
36.<br />
<br />
37.<br />
<br />
== Music Presentation Sign Up ==<br />
<br />
* <span style="color:#ccc">9/29 (THU) : Cecilia Wu, Kurt Werner, Colin Sullivan</span><br />
<br />
* <span style="color:#ccc">10/4 (TUE) : Beau Hye Silver, Kevin Chau, Max Praglin</span><br />
<br />
* <span style="color:#ccc">10/6 (THU) : Duncan Lindsay, Jeff Rowell, Micah Arvey</span><br />
<br />
* <span style="color:#ccc">10/11 (TUE) : Sarah Smith, Max Ryan, Alex Stabile</span><br />
<br />
* <span style="color:#ccc">10/13 (THU) : Timothy Wong, Danny Organ, Afrooz Family</span><br />
<br />
* <span style="color:#ccc">10/18 (TUE) : Aravind Arun, Lydia Zodda, David Sabeti</span><br />
<br />
* <span style="color:#ccc">10/20 (THU) : Derek Tingle, Helen Chavez, Andy Stuhl</span><br />
<br />
* <span style="color:#ccc">10/25 (TUE) : Lulu DeBoer</span><br />
<br />
* <span style="color:#faa">10/27 (THU) : "Monterey Whale Watch!"</span><br />
<br />
* <span style="color:#ccc">11/1 (TUE) : Eri Gamo, Evan Gitterman</span><br />
<br />
* <span style="color:#ccc">11/3 (THU) : Mayank Sanganeria, Sidharth Kumar, Jennifer Hsu</span><br />
<br />
* <span style="color:#ccc">11/8 (TUE) : Calvin Wang, Abrey Mann</span><br />
<br />
* <span style="color:#faa">11/10 (THU) : Joint Lecture with UMich</span><br />
<br />
* <span style="color:#ccc">11/15 (TUE) : JP Wright, Turenas</span><br />
<br />
* <span style="color:#ccc">11/17 (THU) : Chowning lecture</span><br />
<br />
* 11/29 (TUE) : Melvin Low, Sewon Jang<br />
<br />
* 12/1 (THU) : Francois Germain, Matt Weber, Last Chance! <-- sign here!!</div>Srsmithhttps://ccrma.stanford.edu/mediawiki/index.php?title=220a-fall-2011&diff=12372220a-fall-20112011-09-29T16:31:14Z<p>Srsmith: </p>
<hr />
<div>* Your presentation should be more or less 10 minutes!<br />
* Please write down your name under the date you prefer. You can edit this page by clicking the '''"edit" link'''. If you have trouble, please contact the TAs to modify this page for you.<br />
<br />
<br />
== Music Presentation Sign Up ==<br />
<br />
* 9/29 (THU) : Cecilia Wu, Kurt Werner, Colin Sullivan<br />
<br />
* 10/4 (TUE)<br />
<br />
* 10/6 (THU): Duncan Lindsay, Jeff Rowell<br />
<br />
* 10/11 (TUE): Sarah Smith<br />
<br />
* 10/13 (THU)<br />
<br />
* 10/18 (TUE)<br />
<br />
* 10/20 (THU)<br />
<br />
* 10/25 (TUE)<br />
<br />
* 10/27 (THU)<br />
<br />
* 11/1 (TUE)<br />
<br />
* 11/3 (THU)<br />
<br />
* 11/8 (TUE)<br />
<br />
* 11/10 (THU)<br />
<br />
* 11/15 (TUE)<br />
<br />
* 11/17 (THU)<br />
<br />
* 11/29 (TUE)<br />
<br />
* 12/1 (THU)</div>Srsmith