CCRMA Wiki - User contributions: Cforkish [en] - MediaWiki 1.24.1 - retrieved 2024-03-28T18:37:02Z

220c-spring-2010 - 2010-05-20T17:24:25Z
<p>Cforkish: /* Use the Space Below to Link to Your Project Pages/Wikis */</p>
<hr />
<div>[[Category:Courses]]<br />
= [https://ccrma.stanford.edu/courses/220c/ <b>Music 220c</b>] - Research Seminar in Computer-Generated Music =<br />
<br />
== [http://ccrma.stanford.edu/wiki/220c-spring-2010/about <b>About the Class</b>] ==<br />
<br />
<br />
==Use the Space Below to Link to Your Project Pages/Wikis==<br />
Short blurbs and links to project pages:<br />
<br />
* <b>Bjoern Erlach</b> - w/ J. Abel. inter-sampling artifact calibration, acoustic modeling<br />
* [https://ccrma.stanford.edu/~cforkish/220c/ <b>Charlie Forkish</b>] - Produce a stage show. Develop a system that will take inputs from each player (of about five) and visualize each input on a freakin' OVERHEAD PROJECTOR(!) — using freq. analysis/tracking, envelope tracking, timbre tracking, etc.<br />
* [https://ccrma.stanford.edu/~hanaboy/220c/ <b>Stephen Henderson</b>] - Alzheimer's: helped by the medial prefrontal cortex? Helped by music?<br />
* [https://ccrma.stanford.edu/~leshg/220C/ <b>Grahame Lesh</b>] - Live Video Recording/Editing of a band based on their output.<br />
* [http://ccrma.stanford.edu/~tymaue/220c/ <b>Tyler Maue</b>] - Lüp-It: One-man band loop generator.<br />
* [https://ccrma.stanford.edu/~lmelvin/220c/ <b>Linden Melvin</b>] - Live Sound Synthesis to make a Soundscape<br />
* <b>Dohi Moon</b> - Electronic Music + Animation (String Quartet)<br />
* [https://ccrma.stanford.edu/~craffel/sound/echo <b>Colin Raffel</b>] - Getting Rain Barrels - Live sampling, wireless miking — group music making.<br />
* [https://ccrma.stanford.edu/~mrepper/220c/ <b>Michael Repper</b>] - Bending Music, Spectrograms for Donald Barra's new book, "Shaping Music" <br />
* [https://ccrma.stanford.edu/wiki/Shep421 <b>Adam Shepperd</b>] - Composition for infrasonics. Creation of very low frequency driver using tactile transducers and found satellite dish.<br />
* [https://ccrma.stanford.edu/~isyiwang/220c/ <b>Isaac Wang</b>] - Expansion of 220B project - sonifying Twitter updates - something generative/automated that also sounds good; put the interface on a server so that people can "tweet" from anywhere. Collaborative.<br />
* [https://ccrma.stanford.edu/~jwitt90/220c/bass/bassweb.htm <b>Jacob Wittenberg</b>] - The Faceless Bass Player. A virtual bass player that can accompany a jazz pianist on a song that the pianist inputs. The first step in a to-be-extended-and-improved model for jazz bass players.<br />
* [https://ccrma.stanford.edu/~xiangzh/Site/Site/220C.html <b>Xiang Zhang</b>] - w/ J. Abel. 3d modeling, acoustic modeling <br />
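Several of the projects above (the stage-show visualizer, the one-man-band looper) lean on envelope tracking. As a rough illustration of that building block — not code from any of these projects, and all names and parameter values are made up — here is a minimal one-pole envelope follower in Python:

```python
import math

def envelope_follow(samples, sr, attack_ms=5.0, release_ms=50.0):
    """Track the amplitude envelope of a signal with a one-pole smoother.

    Rectifies each sample, then smooths with a fast coefficient while the
    signal is rising (attack) and a slow one while it falls (release).
    """
    atk = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        x = abs(x)                      # full-wave rectify
        coef = atk if x > env else rel  # pick attack or release smoothing
        env = coef * env + (1.0 - coef) * x
        out.append(env)
    return out
```

The asymmetric attack/release choice is the usual trade-off: fast attack so onsets register immediately, slow release so the envelope doesn't flutter between cycles.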
<br />
<br />
==CONCERT PLANNING==<br />
<br />
<b>Thurs. May 27, 2010</b><br />
<br />
Sound-check SAME DAY.<br />
<b>sound-check order:</b><br />
<br />
*Isaac 3:30pm<br />
*Tyler 4pm<br />
*Jacob 4:30pm<br />
*Linden 5pm<br />
*Grahame 5:30pm<br />
<br />
<i>outside</i><br />
<br />
*Adam 6pm<br />
*Colin 6:30pm<br />
<br />
<br />
<b>CONCERT order:</b><br />
*Melvin<br />
*Wittenberg<br />
*Maue<br />
*Bjoern<br />
*Wang<br />
*Grahame<br />
<br />
<i>outside</i><br />
<br />
*Adam<br />
*Colin<br />
<br />
<br />
Backyard requirements:<br />
*risers<br />
*screen<br />
*PA<br />
*stage<br />
<br />
===Rehearsal Times:===<br />
*Linden - M 6-7:30, T 6:30-8, W 12-1:30<br />
*Jacob - M 4:15-6, T 10-11<br />
*Tyler - M 7:30-9, W 2:30-4, R 12-1<br />
*Colin - M 3-4, T 5-6:30, W 1:30-2:30<br />
*Adam - W 4-5<br />
*Bjoern - T 9-10<br />
*Isaac - T 8-9, W 5-6,<br />
*Grahame - T 11-12, W 11-12<br />
<br />
[[Image:Rehearse220c_03.jpg]]<br />
<br />
<br />
----<br />
Email [mailto:cc@ccrma.stanford.edu Chris] ~ <br />
Email [mailto:mpberger@ccrma.stanford.edu Michael]</div>

128-spring-2010-Assignment3 - 2010-05-12T10:53:21Z
<p>Cforkish: /* Group */</p>
<hr />
<div>== Group (example) ==<br />
<br />
* '''members''': Jieun Oh<br />
* '''(tentative) title of piece''': Converge<br />
* '''summary of piece concept''': collect data (location, time, audio recording, text, pictures, tapping gestures) from performers prior to the concert, and combine the elements into a piece during performance based on audience preference<br />
* '''link to all related files (chuck, audio files, instructions, scores)''' : [http://ccrma.stanford.edu/~jieun5 converge files]<br />
* '''documentation of what you tried as of May 11''': blah blah blah<br />
<br />
== Virtual Handbell Choir ==<br />
<br />
* '''members''': Nick, Daniel, Jay <br />
* '''(tentative) title of piece''': Virtual Handbell Choir (very tentative)<br />
* '''summary of piece concept''': Air handbells for 15 SLOrk stations w/ Golf Controllers, featuring a Guitar Hero-like GUI that will automatically disseminate the parts of a MIDI song to each SLOrk station, with each player being in charge of two bells.<br />
* '''link to all related files (chuck, audio files, instructions, scores)''' : <br />
Milestones:<br />
* Instrument - Nick, Daniel - Wednesday 12th<br />
* Network - Jay - Monday 10th (something done to pass to Daniel)<br />
* MIDI - Jay - Monday 10th (something done to pass to Daniel)<br />
* Graphics - Daniel - Monday 17th<br />
* INTEGRATION - ALL - Wednesday 19th<br />
* Piece - Nick - At least something by Wednesday 19th<br />
<br />
Ideas for network:<br />
# songs stored on server - client downloads the MIDI, parses it, determines what it needs, and displays that<br />
# songs stored on server - server parses the song - client asks the server for the specific part it needs, then downloads and displays it<br />
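Either network option ends the same way: each station holds only the notes for its own two bells. A minimal sketch of that final filtering step, assuming the MIDI song has already been parsed into `(time, note)` events — the function and the station mapping are hypothetical names, not project code:

```python
def assign_parts(events, stations):
    """Split parsed (time_sec, midi_note) events into per-station parts.

    `stations` maps a station id to the set of MIDI note numbers
    (two bells) that station is responsible for ringing.
    """
    parts = {sid: [] for sid in stations}
    for t, note in events:
        for sid, bells in stations.items():
            if note in bells:
                parts[sid].append((t, note))
    return parts
```

Under option 1 this filter runs on the client after it downloads the whole MIDI; under option 2 it runs once on the server, which then serves each station only its own part.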
<br />
Ideas for graphics:<br />
* flies at you at an angle like Rock Band<br />
* hits are circles; higher velocity = larger circle + different color<br />
* long rectangle for the fast dinging thing<br />
* lower horizontal bar to show you need to dampen<br />
<br />
Ideas for instrument:<br />
* dampen to your chest and/or with pedal - at least have the pedal be a "kill switch" in case of error<br />
* use STK shaker code to excite the bell sound<br />
<br />
* '''documentation of what you tried as of May 11''':<br />
<br />
** '''Nick''': processing of handbell samples to transpose into 3 full octaves and basic, functioning control with controller<br />
** '''Daniel''':<br />
** '''Jay''':<br />
<br />
** '''plan to integrate further for basic instrument Wednesday'''<br />
<br />
== Group Awesome ==<br />
<br />
* '''members''': Giancarlo Daniele, Ben Holtz, Linden Melvin<br />
* '''(tentative) title of piece''': Sampling Machine<br />
* '''summary of piece concept''': Our goal is to use sampling as a form of expression. Each SLOrk station will have twenty samples at its disposal (mapped to keyboard keys) from iconic songs defining decades in American music. Each machine is an "instrument" capable of taking the samples and passing them through a ChucK effect or filter. Our piece has a few tentative components: 1) a game component, where one station plays one sample at a time and passes it to another station, which has to play that sample, add a new sample, and pass it along; 2) a scored component, where each station will read keypresses, etc. from a score; 3) an improvised component, where a conductor points to stations that are responsible for playing certain parts.<br />
* '''link to all related files (chuck, audio files, instructions, scores)''' : <br />
* '''documentation of what you tried as of May 11''': As a group, we've come up with a variety of different samples, "instruments" (coded in ChucK w/ SndBuf effects), and a very preliminary idea of the various components of our piece.<br />
<br />
== Everybodyeverybody ==<br />
<br />
* '''members''': Alan Hshieh, Aaron Zarraga, Isaac Wang<br />
* '''(tentative) title of piece''': Fanfare for the Common Man<br />
* '''summary of piece concept''': The idea for our piece is that we want to combine "live" coding with a pre-written score in order to simulate a full orchestra. Each SlOrk station will be a different orchestral section with an assigned part to play. The piece will be slow and harmonic to allow each person to physically type in a new note's frequency value for their instrument. These coding screens will be projected in order to show off the live-coding aspect. We want to create a piece that is interesting to watch as well. We plan to make the smack sensor swap out the current shred with the new one that is currently being typed, so everyone will be striking their instrument at the appropriate times to play each note. This will be awesome to look at, especially since we plan on having some Taiko drums in the orchestra. We are still developing our ideas for the final piece, but we know that we want the song to be epic.<br />
* '''link to all related files (chuck, audio files, instructions, scores)''' : http://hshieh.com/slork<br />
* '''documentation of what you tried as of May 11''': We worked out our concept, figured out a basic rendition of Twinkle Twinkle Little Star to demo in class, and will be meeting May 12th to compose and fine-tune. <br />
<br />
== noise and headbanging ==<br />
<br />
* '''members''': adam somers, uri nieto, charlie forkish<br />
* '''(tentative) title of piece''': Concerto For Touchboard and Headbang Orchestra<br />
* '''summary of piece concept''': the headbang orchestra will be composed of twelve players wearing GameTrak controller gloves around their necks triggering samples by headbanging. there will be a percussion section composed of two kick drum players, one snare drum player, and four hi-hat players. there will be three guitar players each on a different power chord, and two more each triggering a different riff or shred. the soloists will be adam playing noisy awesome on the touchboard and uri modulating adam's awesome by headbanging and hairswirling. charlie will be conducting the piece.<br />
* '''link to all related files (chuck, audio files, instructions, scores)''' : [https://ccrma.stanford.edu/~cforkish/slork/concerto.zip concerto.zip]<br />
* '''documentation of what you tried as of May 11''': we have written a basic chuck patch for the headbang orchestra to trigger their samples, a basic patch to modulate noise by headbanging, and a rough outline of what the headbang orchestra will be playing underneath the soloists.<br />
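The headbang-triggered sampler above amounts to onset detection on a GameTrak tether/tilt axis. A hedged sketch of that detection logic — a threshold crossing plus a refractory period so one headbang doesn't double-trigger; the function name, threshold, and refractory constant are all illustrative, not values from the patch:

```python
def headbang_triggers(samples, threshold=0.6, refractory=5):
    """Return sample indices where the control signal crosses `threshold`
    upward, with at least `refractory` samples between triggers."""
    triggers = []
    last = -refractory  # allow a trigger on the very first crossing
    prev = 0.0
    for i, v in enumerate(samples):
        # rising edge through the threshold, outside the refractory window
        if v >= threshold and prev < threshold and i - last >= refractory:
            triggers.append(i)
            last = i
        prev = v
    return triggers
```

Each returned index would fire one sample playback; the refractory window is what keeps a single vigorous headbang from reading as two hits.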
<br />
== Group ==<br />
<br />
* '''members''': <br />
* '''(tentative) title of piece''': <br />
* '''summary of piece concept''':<br />
* '''link to all related files (chuck, audio files, instructions, scores)''' : <br />
* '''documentation of what you tried as of May 11''':<br />
<br />
== Group ==<br />
<br />
* '''members''': <br />
* '''(tentative) title of piece''': <br />
* '''summary of piece concept''':<br />
* '''link to all related files (chuck, audio files, instructions, scores)''' : <br />
* '''documentation of what you tried as of May 11''':<br />
<br />
== Group ==<br />
<br />
* '''members''': <br />
* '''(tentative) title of piece''': <br />
* '''summary of piece concept''':<br />
* '''link to all related files (chuck, audio files, instructions, scores)''' : <br />
* '''documentation of what you tried as of May 11''':</div>

DIA - 2009-11-11T23:20:59Z
<p>Cforkish: /* Design */</p>
<hr />
<div>= DIA: Do It A Cappella =<br />
<br />
== Idea / Premise ==<br />
* a visual feedback system for a cappella singers to use alone or together as an ensemble<br />
* a learning tool to train the voice and the ears<br />
<br />
<br />
== Motivation ==<br />
* a large population of amateur a cappella singers who lack the necessary ear training to effectively self-correct<br />
* the need for an objective method of performance evaluation to eliminate confusion over "who's hearing it right"<br />
* a desire to explore basic real-time audio information retrieval techniques<br />
<br />
<br />
== What is DIA? ==<br />
* an application for the real-time visualization of musical properties of one or more singers<br />
* a network application allowing multiple singers to use laptops to see personalized visualizations provided by a host computer<br />
* a vocal training application providing both real-time visualization of error and post-performance reports of achievement statistics<br />
* a music learning tool for people who can't read music<br />
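The host-to-client visualization traffic is planned to run over OSC (see the Design section), and the OSC wire format is simple enough to sketch by hand: an address string, a type-tag string, then big-endian arguments, with each string null-terminated and padded to a 4-byte boundary. A minimal encoder as an illustration — the `/dia/pitch` address and function names are made-up examples, not part of the project:

```python
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad a string to a multiple of 4 bytes (OSC rule)."""
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))  # type tags
    for v in floats:
        msg += struct.pack(">f", v)  # big-endian float32
    return msg
```

In practice an OSC library would handle this, but the byte layout shows why the protocol is cheap enough to stream per-frame pitch/error values to every client laptop.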
<br />
<br />
== Design ==<br />
* audio input will all be into one host computer<br />
* networked client computers will be able to receive channel-specific customized visualizations via OSC<br />
* performance error will be calculated with respect to user-provided MIDI files of arrangements<br />
* pitch information will be determined by implementing a version of the YIN pitch detection algorithm <br />
* tonal/vowel information will be provided by FFT visualizations<br />
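Since the design above names YIN, here is a minimal sketch of its core (difference function, cumulative mean normalized difference, absolute threshold) following de Cheveigné & Kawahara's description — no parabolic interpolation, and the frequency bounds and threshold are illustrative defaults, not the project's actual parameters:

```python
import numpy as np

def yin_pitch(frame, sr, fmin=80.0, fmax=1000.0, threshold=0.15):
    """Estimate the fundamental frequency of a mono frame via YIN."""
    x = np.asarray(frame, dtype=float)
    n = len(x)
    tau_min = int(sr / fmax)
    tau_max = int(sr / fmin)
    # Step 1: difference function d(tau)
    d = np.zeros(tau_max + 1)
    for tau in range(1, tau_max + 1):
        diff = x[:n - tau] - x[tau:]
        d[tau] = np.dot(diff, diff)
    # Step 2: cumulative mean normalized difference function
    cmndf = np.ones(tau_max + 1)
    running = 0.0
    for tau in range(1, tau_max + 1):
        running += d[tau]
        cmndf[tau] = d[tau] * tau / running if running > 0 else 1.0
    # Step 3: first dip below the absolute threshold, walked to its minimum
    for tau in range(tau_min, tau_max + 1):
        if cmndf[tau] < threshold:
            while tau + 1 <= tau_max and cmndf[tau + 1] < cmndf[tau]:
                tau += 1
            return sr / tau
    # Fallback: global minimum if nothing crossed the threshold
    return sr / int(np.argmin(cmndf[tau_min:tau_max + 1]) + tau_min)
```

A real-time version would add parabolic interpolation around the chosen lag for sub-sample accuracy, but even this form lands within a couple of Hz on clean vocal-range tones.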
<br />
== Testing ==<br />
* the harmonics, a stanford a cappella group, will be able to put DIA through comprehensive, rigorous testing<br />
* effectiveness of execution will be judged based on the final product's ability to provide accurate and useful visual feedback to the user<br />
<br />
<br />
== Team ==<br />
* the DIA team is comprised of charlie forkish and jay bhat<br />
<br />
<br />
== Milestones ==<br />
* DATE 1: 11/15 - accurate and robust implementation of YIN pitch detection algorithm with one input and playback of midi files<br />
* DATE 2: 11/25 - support for multiple inputs, polished visual feedback provided to client computers over network<br />
* DATE 3: 12/4 - tone evaluation and achievement statistics report card</div>

256a-fall-2009/final - 2009-11-11T07:56:10Z
<hr />
<div>= Music 256a | Final Projects =<br />
<br />
[[Growl Hero]]<br />
<br><br />
[http://ccrma.stanford.edu/~adam/256a/project The Insaniac]<br />
<br><br />
[[DIA]]</div>

Dia - 2009-11-11T07:55:14Z
<p>Cforkish: moved Dia to DIA</p>
<hr />
<div>#REDIRECT [[DIA]]</div>Cforkishhttps://ccrma.stanford.edu/mediawiki/index.php?title=256a-fall-2009/final&diff=9257256a-fall-2009/final2009-11-11T07:54:27Z<p>Cforkish: </p>
<hr />
<div>= Music 256a | Final Projects =<br />
<br />
[[Growl Hero]]<br />
<br><br />
[http://ccrma.stanford.edu/~adam/256a/project The Insaniac]<br />
<br><br />
[[Dia]]</div>Cforkishhttps://ccrma.stanford.edu/mediawiki/index.php?title=DIA&diff=9255DIA2009-11-11T07:54:08Z<p>Cforkish: moved 256a-fall-2009/final/dia to Dia</p>
<hr />
<div>= DIA: Do It A Cappella =<br />
<br />
== Idea / Premise ==<br />
* a visual feedback system for a cappella singers to use alone or together as an ensemble<br />
* a learning tool to train the voice and the ears<br />
<br />
<br />
== Motivation ==<br />
* a large population of amateur a cappella singers who lack the necessary ear training to effectively self-correct<br />
* the need for an objective method of performance evaluation to eliminate confusion over "who's hearing it right"<br />
* a desire to explore basic real-time audio information retrieval techniques<br />
<br />
<br />
== What is DIA? ==<br />
* an application for the real-time visualization of musical properties of one or more singers<br />
* a network application allowing multiple singers to use laptops to see personalized visualizations provided by a host computer<br />
* a vocal training application providing both real-time visualization of error and post-performance reports of achievement statistics<br />
* a music learning tool for people who can't read music<br />
<br />
<br />
== Design ==<br />
* all audio input will be routed into a single host computer<br />
* networked client computers will be able to receive channel-specific customized visualizations via OSC<br />
* performance error will be calculated with respect to user-provided MIDI files of the arrangements<br />
* pitch information will be determined by implementing a version of the YIN pitch detection algorithm<br />
* tonal/vowel information will be provided by FFT visualizations<br />
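The YIN step above can be sketched in plain Python. This is a simplified, unoptimized version of the difference-function approach (no windowing or parabolic interpolation, which a real implementation would add), demonstrated on a synthetic 440 Hz sine:

```python
import math

def yin_pitch(samples, sample_rate, fmin=80.0, fmax=1000.0, threshold=0.1):
    """Estimate pitch with a simplified YIN: difference function,
    cumulative-mean normalization, then an absolute threshold."""
    tau_min = int(sample_rate / fmax)
    tau_max = int(sample_rate / fmin)
    n = len(samples) - tau_max
    # Step 1: difference function d(tau)
    d = [0.0] * (tau_max + 1)
    for tau in range(1, tau_max + 1):
        d[tau] = sum((samples[i] - samples[i + tau]) ** 2 for i in range(n))
    # Step 2: cumulative-mean-normalized difference d'(tau)
    dprime = [1.0] * (tau_max + 1)
    running = 0.0
    for tau in range(1, tau_max + 1):
        running += d[tau]
        dprime[tau] = d[tau] * tau / running if running else 1.0
    # Step 3: first dip below the threshold; walk down to its local minimum
    for tau in range(tau_min, tau_max + 1):
        if dprime[tau] < threshold:
            while tau + 1 <= tau_max and dprime[tau + 1] < dprime[tau]:
                tau += 1
            return sample_rate / tau
    return None  # unvoiced / no pitch found

# demo: a 440 Hz sine sampled at 8 kHz comes back within a few Hz of 440
sr = 8000
sig = [math.sin(2 * math.pi * 440 * i / sr) for i in range(2048)]
print(yin_pitch(sig, sr))
```

Without parabolic interpolation the estimate is quantized to integer lag values, so the result lands near 440 Hz rather than exactly on it.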
<br />
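The channel-specific OSC messages mentioned above are simple enough to pack by hand with only the standard library. A minimal sketch with float-only type tags — the `/dia/...` address path and port 9000 are illustrative placeholders, and in practice an OSC library such as liblo or oscpack would likely be used:

```python
import socket
import struct

def osc_message(address, *args):
    """Pack a minimal OSC message carrying float32 arguments."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode())
    msg += pad(("," + "f" * len(args)).encode())  # type-tag string, e.g. ",f"
    for a in args:
        msg += struct.pack(">f", a)  # arguments are big-endian float32
    return msg

# send one detected pitch to a client visualizer (placeholder host/port)
packet = osc_message("/dia/voice/1/pitch", 440.0)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))
```

Sending one small UDP datagram per analysis frame keeps the host-to-client latency low, which matters for real-time visual feedback.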
<br />
== Testing ==<br />
* The Harmonics, a Stanford a cappella group, will put DIA through comprehensive, rigorous testing<br />
* effectiveness of execution will be judged by the final product's ability to provide accurate and useful visual feedback to the user<br />
<br />
<br />
== Team ==<br />
* the DIA team comprises Charlie Forkish and Jay Bhat<br />
<br />
<br />
== Milestones ==<br />
* DATE 1: 11/18 - accurate and robust implementation of the YIN pitch detection algorithm with one input, plus playback of MIDI files<br />
* DATE 2: 11/25 - support for multiple inputs; polished visual feedback provided to client computers over the network<br />
* DATE 3: 12/4 - tone evaluation and achievement statistics report card</div>Cforkishhttps://ccrma.stanford.edu/mediawiki/index.php?title=256a-fall-2009/final/dia&diff=9256256a-fall-2009/final/dia2009-11-11T07:54:08Z<p>Cforkish: moved 256a-fall-2009/final/dia to Dia</p>
<hr />
<div>#REDIRECT [[Dia]]</div>Cforkishhttps://ccrma.stanford.edu/mediawiki/index.php?title=256a-fall-2009/final&diff=9254256a-fall-2009/final2009-11-11T07:52:50Z<p>Cforkish: </p>
<hr />
<div>= Music 256a | Final Projects =<br />
<br />
[[Growl Hero]]<br />
<br><br />
[http://ccrma.stanford.edu/~adam/256a/project The Insaniac]<br />
<br><br />
[[Dia: Do It A Cappella]]</div>Cforkishhttps://ccrma.stanford.edu/mediawiki/index.php?title=256a-fall-2009/final&diff=9253256a-fall-2009/final2009-11-11T07:52:27Z<p>Cforkish: </p>
<hr />
<div>= Music 256a | Final Projects =<br />
<br />
[[Growl Hero]]<br />
<br><br />
[http://ccrma.stanford.edu/~adam/256a/project The Insaniac]<br />
<br><br />
[[/Dia: Do It A Cappella]]</div>Cforkishhttps://ccrma.stanford.edu/mediawiki/index.php?title=256a-fall-2009/final&diff=9252256a-fall-2009/final2009-11-11T07:50:55Z<p>Cforkish: </p>
<hr />
<div>= Music 256a | Final Projects =<br />
<br />
[[Growl Hero]]<br />
<br><br />
[http://ccrma.stanford.edu/~adam/256a/project The Insaniac]<br />
<br><br />
[[Dia: Do It A Cappella]]</div>Cforkishhttps://ccrma.stanford.edu/mediawiki/index.php?title=DIA&diff=9250DIA2009-11-11T06:53:43Z<p>Cforkish: Created page with '= DIA: Do It A Cappella = == Idea / Premise == * a visual feedback system for a cappella singers to use alone or together as an ensemble * a learning tool to train the voice and…'</p>
<hr />
<div>= DIA: Do It A Cappella =<br />
<br />
== Idea / Premise ==<br />
* a visual feedback system for a cappella singers to use alone or together as an ensemble<br />
* a learning tool to train the voice and the ears<br />
<br />
<br />
== Motivation ==<br />
* a large population of amateur a cappella singers who lack the necessary ear training to effectively self-correct<br />
* the need for an objective method of performance evaluation to eliminate confusion over "who's hearing it right"<br />
* a desire to explore basic real-time audio information retrieval techniques<br />
<br />
<br />
== What is DIA? ==<br />
* an application for the real-time visualization of musical properties of one or more singers<br />
* a network application allowing multiple singers to use laptops to see personalized visualizations provided by a host computer<br />
* a vocal training application providing both real-time visualization of error and post-performance reports of achievement statistics<br />
* a music learning tool for people who can't read music<br />
<br />
<br />
== Design ==<br />
* all audio input will be routed into a single host computer<br />
* networked client computers will be able to receive channel-specific customized visualizations via OSC<br />
* performance error will be calculated with respect to user-provided MIDI files of the arrangements<br />
* pitch information will be determined by implementing a version of the YIN pitch detection algorithm<br />
* tonal/vowel information will be provided by FFT visualizations<br />
<br />
<br />
== Testing ==<br />
* The Harmonics, a Stanford a cappella group, will put DIA through comprehensive, rigorous testing<br />
* effectiveness of execution will be judged by the final product's ability to provide accurate and useful visual feedback to the user<br />
<br />
<br />
== Team ==<br />
* the DIA team comprises Charlie Forkish and Jay Bhat<br />
<br />
<br />
== Milestones ==<br />
* DATE 1: 11/18 - accurate and robust implementation of the YIN pitch detection algorithm with one input, plus playback of MIDI files<br />
* DATE 2: 11/25 - support for multiple inputs; polished visual feedback provided to client computers over the network<br />
* DATE 3: 12/4 - tone evaluation and achievement statistics report card</div>Cforkishhttps://ccrma.stanford.edu/mediawiki/index.php?title=220a-fall-2009/studentmusic&diff=8785220a-fall-2009/studentmusic2009-10-01T22:22:58Z<p>Cforkish: /* SIGN-UP for MUSIC Presentations on wiki */</p>
<hr />
<div>== SIGN-UP for MUSIC Presentations on wiki ==<br />
* Thurs. Sept. 24 = Dohi & Adam<br />
* Tues. Sept. 29 = Adam Somers, Uri Nieto, Zach Brand<br />
* Thurs. Oct. 1 = Sarah Masimore, Nick Kruge, Jacqueline Gordon<br />
* Tues. Oct. 6 = Ben Cunningham, Charlie Forkish<br />
* Thurs. Oct. 8 = Jacob Wittenberg<br />
* Tues. Oct. 13 = Ben Roth <br />
* Thurs. Oct. 15 = <br />
* Tues. Oct. 20 = Colin Raffel<br />
* Thurs. Oct. 22 =<br />
* Tues. Oct. 27 = Matt Bush<br />
* Thurs. Oct. 29 = Andrew Plan<br />
* Tues. Nov. 3 =<br />
* Thurs. Nov. 5 =<br />
* Tues. Nov. 10 =<br />
* Thurs. Nov. 12 = John Bauer<br />
* Tues. Nov. 17 =<br />
<br />
<br />
<br />
--<br />
* [http://cm-wiki.stanford.edu/wiki/220a-fall-2009 Back to 220a wiki Page]</div>Cforkish