https://ccrma.stanford.edu/mediawiki/api.php?action=feedcontributions&user=Jsadural&feedformat=atomCCRMA Wiki - User contributions [en]2024-03-29T12:20:08ZUser contributionsMediaWiki 1.24.1https://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9923223-spring-20102010-05-07T22:35:39Z<p>Jsadural: /* Announcements */</p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[223-spring-2010/musicAnalysis|Electro-acoustic Analysis Presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
<br />
* [[223-spring-2010/StudentLinks|Student Homework submission links]]<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
* [[223-spring-2010/hw4|Assignment #4 (for 2, 3, & 4 unit registrants)]]<br />
<br />
== Announcements ==<br />
==== April 23, 2010 ====<br />
<br />
I look forward to seeing you in class today, and then for our lunch session chez me next week.<br />
<br />
A few things...<br />
<br />
*I. Please continue to work on your masterpieces for our final concert.<br />
<br />
*II. If you are taking the course for 2, 3, or 4 credits, please complete your masterpieces before the May 21 jury. (Some folks will present instead on the May 28 jury, but it has yet to be determined who is on which day.)<br />
<br />
*III. If you are taking the course for 3 or 4 credits we are looking forward to your splendidly rich and insightful analyses of a piece of electroacoustic music of your choice. Presentations should range from 10-15 minutes in duration, including listening to the piece (whether excerpts or in full).<br />
<br />
==== May 10, 2010 @ 1:00pm ====<br />
<br />
Granular synthesis using SuperCollider workshop<br />
<br />
* I will show how to create granular effects by manipulating samples in SuperCollider. I will cover the basics of the SuperCollider server and language, MIDI and OSC control, and granular effects. My goal is to get you making "sounds" as quickly as possible on Linux, while building enough interest and curiosity in SuperCollider to pursue further endeavors in algorithmic composition.<br />
<br />
* Stay tuned for the "Spatialization" session<br />
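
* The core idea behind the workshop—slicing short, enveloped "grains" out of a sample and rescattering them in time—can be sketched outside SuperCollider as well. The following is my own minimal Python/NumPy illustration (the helper name and parameters are hypothetical, not workshop material):

```python
import numpy as np

def granulate(sample, sr=44100, grain_dur=0.05, density=100, out_dur=2.0, seed=0):
    """Scatter Hann-windowed grains taken from `sample` across `out_dur` seconds.
    (Hypothetical helper for illustration; parameter names are my own.)"""
    rng = np.random.default_rng(seed)
    glen = int(grain_dur * sr)                      # grain length in samples
    window = np.hanning(glen)                       # smooth envelope avoids clicks
    out = np.zeros(int(out_dur * sr) + glen)
    for _ in range(int(density * out_dur)):
        src = rng.integers(0, len(sample) - glen)   # random read position in sample
        dst = rng.integers(0, len(out) - glen)      # random write position in output
        out[dst:dst + glen] += sample[src:src + glen] * window
    return out / np.abs(out).max()                  # peak-normalize

# granulate one second of a 440 Hz sine as a stand-in "sample"
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
cloud = granulate(tone, sr=sr)
```

In SuperCollider itself the same idea is available as a built-in UGen such as TGrains, which makes real-time, sample-accurate versions of this loop practical for performance.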
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/StudentSubmissions&diff=9889223-spring-2010/StudentSubmissions2010-04-27T22:41:21Z<p>Jsadural: moved 223-spring-2010/StudentLinks to 223-spring-2010/StudentSubmissions</p>
<hr />
<div>== Homework 1 ==<br />
* [[223-spring-2010/SampleStudent|YourNameHere]]<br />
== Homework 2 ==<br />
== Homework 3 ==<br />
== Homework 4 ==</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/StudentLinks&diff=9890223-spring-2010/StudentLinks2010-04-27T22:41:21Z<p>Jsadural: moved 223-spring-2010/StudentLinks to 223-spring-2010/StudentSubmissions</p>
<hr />
<div>#REDIRECT [[223-spring-2010/StudentSubmissions]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/StudentSubmissions&diff=9888223-spring-2010/StudentSubmissions2010-04-27T22:40:56Z<p>Jsadural: Created page with '== Homework 1 == * YourNameHere == Homework 2 == == Homework 3 == == Homework 4 =='</p>
<hr />
<div>== Homework 1 ==<br />
* [[223-spring-2010/SampleStudent|YourNameHere]]<br />
== Homework 2 ==<br />
== Homework 3 ==<br />
== Homework 4 ==</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9887223-spring-20102010-04-27T22:34:25Z<p>Jsadural: /* labs + assignments on wiki */</p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[223-spring-2010/musicAnalysis|Electro-acoustic Analysis Presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
<br />
* [[223-spring-2010/StudentLinks|Student Homework submission links]]<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
* [[223-spring-2010/hw4|Assignment #4 (for 2, 3, & 4 unit registrants)]]<br />
<br />
== Announcements ==<br />
==== April 23, 2010 ====<br />
<br />
I look forward to seeing you in class today, and then for our lunch session chez me next week.<br />
<br />
A few things...<br />
<br />
*I. Please continue to work on your masterpieces for our final concert.<br />
<br />
*II. If you are taking the course for 2, 3, or 4 credits, please complete your masterpieces before the May 21 jury. (Some folks will present instead on the May 28 jury, but it has yet to be determined who is on which day.)<br />
<br />
*III. If you are taking the course for 3 or 4 credits we are looking forward to your splendidly rich and insightful analyses of a piece of electroacoustic music of your choice. Presentations should range from 10-15 minutes in duration, including listening to the piece (whether excerpts or in full).<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9886223-spring-20102010-04-27T22:33:27Z<p>Jsadural: /* labs + assignments on wiki */</p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[223-spring-2010/musicAnalysis|Electro-acoustic Analysis Presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
* [[223-spring-2010/hw4|Assignment #4 (for 2, 3, & 4 unit registrants)]]<br />
<br />
* [[223-spring-2010/StudentLinks|Student Homework submission links]]<br />
<br />
== Announcements ==<br />
==== April 23, 2010 ====<br />
<br />
I look forward to seeing you in class today, and then for our lunch session chez me next week.<br />
<br />
A few things...<br />
<br />
*I. Please continue to work on your masterpieces for our final concert.<br />
<br />
*II. If you are taking the course for 2, 3, or 4 credits, please complete your masterpieces before the May 21 jury. (Some folks will present instead on the May 28 jury, but it has yet to be determined who is on which day.)<br />
<br />
*III. If you are taking the course for 3 or 4 credits we are looking forward to your splendidly rich and insightful analyses of a piece of electroacoustic music of your choice. Presentations should range from 10-15 minutes in duration, including listening to the piece (whether excerpts or in full).<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/musicAnalysis&diff=9885223-spring-2010/musicAnalysis2010-04-27T22:30:41Z<p>Jsadural: Created page with '== May 7 == *Michael Berger *Noah Burbank *Lauchlan Casey *Blair Foley (if registered for 3 or 4 units) == May 14 == *Jens Joller *JJ O'Brien *Jason Sadurai *Adam Sheppard'</p>
<hr />
<div>== May 7 ==<br />
<br />
*Michael Berger<br />
*Noah Burbank<br />
*Lauchlan Casey<br />
*Blair Foley (if registered for 3 or 4 units)<br />
<br />
== May 14 ==<br />
<br />
*Jens Joller<br />
*JJ O'Brien<br />
*Jason Sadurai<br />
*Adam Sheppard</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9884223-spring-20102010-04-27T22:30:34Z<p>Jsadural: </p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[223-spring-2010/musicAnalysis|Electro-acoustic Analysis Presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
* [[223-spring-2010/hw4|Assignment #4 (for 2, 3, & 4 unit registrants)]]<br />
<br />
== Announcements ==<br />
==== April 23, 2010 ====<br />
<br />
I look forward to seeing you in class today, and then for our lunch session chez me next week.<br />
<br />
A few things...<br />
<br />
*I. Please continue to work on your masterpieces for our final concert.<br />
<br />
*II. If you are taking the course for 2, 3, or 4 credits, please complete your masterpieces before the May 21 jury. (Some folks will present instead on the May 28 jury, but it has yet to be determined who is on which day.)<br />
<br />
*III. If you are taking the course for 3 or 4 credits we are looking forward to your splendidly rich and insightful analyses of a piece of electroacoustic music of your choice. Presentations should range from 10-15 minutes in duration, including listening to the piece (whether excerpts or in full).<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/hw4&diff=9883223-spring-2010/hw42010-04-27T22:28:18Z<p>Jsadural: Created page with '== Due on May 7 (for 2, 3, & 4 unit registrants) == Create a rhythmic study (30 seconds to 2 minutes in length) by applying at least three rhythmic devices discussed in class (a…'</p>
<hr />
<div>== Due on May 7 (for 2, 3, & 4 unit registrants) ==<br />
<br />
Create a rhythmic study (30 seconds to 2 minutes in length) by applying at least three rhythmic devices discussed in class (as well as other techniques that you might imagine) to one or more simple, short sound sources of your choice (e.g., a click, a vocal phoneme, a cough, a door slam). The one noteworthy restriction is that you must do at least one thing that (a) feels contrary to your usual working method (at least in part); and (b) produces a music that is significantly different from what you might typically make. The piece should be submitted on audio CD and be labeled with your name.</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9882223-spring-20102010-04-27T22:27:35Z<p>Jsadural: /* labs + assignments on wiki */</p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/musicAnalysis|Electro-acoustic Analysis Presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
* [[223-spring-2010/hw4|Assignment #4 (for 2, 3, & 4 unit registrants)]]<br />
<br />
== Announcements ==<br />
==== April 23, 2010 ====<br />
<br />
I look forward to seeing you in class today, and then for our lunch session chez me next week.<br />
<br />
A few things...<br />
<br />
*I. Please continue to work on your masterpieces for our final concert.<br />
<br />
*II. If you are taking the course for 2, 3, or 4 credits, please complete your masterpieces before the May 21 jury. (Some folks will present instead on the May 28 jury, but it has yet to be determined who is on which day.)<br />
<br />
*III. If you are taking the course for 3 or 4 credits we are looking forward to your splendidly rich and insightful analyses of a piece of electroacoustic music of your choice. Presentations should range from 10-15 minutes in duration, including listening to the piece (whether excerpts or in full).<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9881223-spring-20102010-04-27T22:27:24Z<p>Jsadural: /* labs + assignments on wiki */</p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/musicAnalysis|Electro-acoustic Analysis Presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
* [[223-spring-2010/hw4|Assignment #4 (for 2, 3, & 4 unit registrants)]]<br />
<br />
== Announcements ==<br />
==== April 23, 2010 ====<br />
<br />
I look forward to seeing you in class today, and then for our lunch session chez me next week.<br />
<br />
A few things...<br />
<br />
*I. Please continue to work on your masterpieces for our final concert.<br />
<br />
*II. If you are taking the course for 2, 3, or 4 credits, please complete your masterpieces before the May 21 jury. (Some folks will present instead on the May 28 jury, but it has yet to be determined who is on which day.)<br />
<br />
*III. If you are taking the course for 3 or 4 credits we are looking forward to your splendidly rich and insightful analyses of a piece of electroacoustic music of your choice. Presentations should range from 10-15 minutes in duration, including listening to the piece (whether excerpts or in full).<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/hw3&diff=9880223-spring-2010/hw32010-04-27T22:26:07Z<p>Jsadural: Created page with '== Please complete one of the following two exercises for next week (April 30) == *I. Plan a contrapuntal texture for two (or more) voices, one that might serve as the pre-compo…'</p>
<hr />
<div>== Please complete one of the following two exercises for next week (April 30) ==<br />
<br />
*I. Plan a contrapuntal texture for two (or more) voices, one that might serve as the pre-compositional plan for a work (or passage of a work) that could be realized later. Be sure to consider differences in foreground and background, and ways in which a listener might attend to each voice at different times.<br />
<br />
*II. Make a pre-compositional formal plan for a piece by overlaying multi-colored lines, each describing a particular parameter or musical issue. First, label the time line (x axis) of your graph. Then choose at least five parameters or issues you wish to describe. Make a creative and sensible overlapping of lines, a formal plan that might be realized in a future piece.</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=220a-spring-2010/musicAnalysis&diff=9879220a-spring-2010/musicAnalysis2010-04-27T22:23:47Z<p>Jsadural: Created page with '== May 7 == *Michael Berger *Noah Burbank *Lauchlan Casey *Blair Foley (if registered for 3 or 4 units) == May 14 == *Jens Joller *JJ O'Brien *Jason Sadurai *Adam Sheppard'</p>
<hr />
<div>== May 7 ==<br />
<br />
*Michael Berger<br />
*Noah Burbank<br />
*Lauchlan Casey<br />
*Blair Foley (if registered for 3 or 4 units)<br />
<br />
== May 14 ==<br />
<br />
*Jens Joller<br />
*JJ O'Brien<br />
*Jason Sadurai<br />
*Adam Sheppard</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9878223-spring-20102010-04-27T22:23:02Z<p>Jsadural: </p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/musicAnalysis|Electro-acoustic Analysis Presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
<br />
== Announcements ==<br />
==== April 23, 2010 ====<br />
<br />
I look forward to seeing you in class today, and then for our lunch session chez me next week.<br />
<br />
A few things...<br />
<br />
*I. Please continue to work on your masterpieces for our final concert.<br />
<br />
*II. If you are taking the course for 2, 3, or 4 credits, please complete your masterpieces before the May 21 jury. (Some folks will present instead on the May 28 jury, but it has yet to be determined who is on which day.)<br />
<br />
*III. If you are taking the course for 3 or 4 credits we are looking forward to your splendidly rich and insightful analyses of a piece of electroacoustic music of your choice. Presentations should range from 10-15 minutes in duration, including listening to the piece (whether excerpts or in full).<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9877223-spring-20102010-04-27T22:21:43Z<p>Jsadural: /* April 23, 2010 */</p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/studentmusic|student music presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
<br />
== Announcements ==<br />
==== April 23, 2010 ====<br />
<br />
I look forward to seeing you in class today, and then for our lunch session chez me next week.<br />
<br />
A few things...<br />
<br />
*I. Please continue to work on your masterpieces for our final concert.<br />
<br />
*II. If you are taking the course for 2, 3, or 4 credits, please complete your masterpieces before the May 21 jury. (Some folks will present instead on the May 28 jury, but it has yet to be determined who is on which day.)<br />
<br />
*III. If you are taking the course for 3 or 4 credits we are looking forward to your splendidly rich and insightful analyses of a piece of electroacoustic music of your choice. Presentations should range from 10-15 minutes in duration, including listening to the piece (whether excerpts or in full).<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9876223-spring-20102010-04-27T22:21:14Z<p>Jsadural: /* April 23, 2010 */</p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/studentmusic|student music presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
<br />
== Announcements ==<br />
==== April 23, 2010 ====<br />
<br />
I look forward to seeing you in class today, and then for our lunch<br />
session chez me next week.<br />
<br />
A few things...<br />
<br />
*I. Please continue to work on your masterpieces for our final concert.<br />
<br />
*II. If you are taking the course for 2, 3, or 4 credits, please<br />
complete your masterpieces before the May 21 jury. (Some folks will<br />
present instead on the May 28 jury, but it has yet to be determined<br />
who is on which day.)<br />
<br />
*III. If you are taking the course for 3 or 4 credits we are looking<br />
forward to your splendidly rich and insightful analyses of a piece of<br />
electroacoustic music of your choice. Presentations should range<br />
from 10-15 minutes in duration, including listening to the piece<br />
(whether excerpts or in full).<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9875223-spring-20102010-04-27T22:20:12Z<p>Jsadural: /* April 23, 2010 */</p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/studentmusic|student music presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
<br />
== Announcements ==<br />
==== April 23, 2010 ====<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9874223-spring-20102010-04-27T22:19:51Z<p>Jsadural: </p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/studentmusic|student music presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
<br />
== Announcements ==<br />
=== April 23, 2010 === <br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9873223-spring-20102010-04-27T22:19:16Z<p>Jsadural: </p>
<hr />
<div>* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/studentmusic|student music presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
<br />
== Announcements ==<br />
= April 23, 2010 = <br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/hw2&diff=9872223-spring-2010/hw22010-04-27T22:16:38Z<p>Jsadural: /* Homework #2: "Transformations" */</p>
<hr />
<div>= Homework #2: "Transformations" =<br />
<br />
Thanks for your very spirited and imaginative contributions to<br />
today's class. I think we have a really great group.<br />
<br />
By way of a reminder, the assignment for next Friday is to create a<br />
"tape" piece--a variation--in response to the given theme.<br />
(Note:"tape" is a misnomer of course, but historically quaint.) <br />
<br />
<br />
=== Specification ===<br />
<br />
*1. The theme, OnceUponATime.WAV, can be found on our coursework site under "materials."<br />
<br />
*2. Completed pieces must be 30 seconds to 2 minutes in duration.<br />
<br />
*3. Pieces must be submitted on audio CD (how nostalgic!) that will play in a commercial CD player, not a computer. A jewel case or sleeve is not required.<br />
<br />
*4. Please label the CD with your name and, optionally, a title.<br />
<br />
*5. Any audio tools may be used to transform the theme into your piece. However, the only sound source you may employ is the given theme; please do not add any additional samples or sound sources.<br />
<br />
That said, the given theme can be warped extensively beyond recognition--or not (as per your individual artistic goals).<br />
<br />
== links to Homework Submissions ==</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/hw2&diff=9871223-spring-2010/hw22010-04-27T22:15:57Z<p>Jsadural: /* Specification */</p>
<hr />
<div>= Homework #2: "Transformations" =<br />
<br />
Thanks for your very spirited and imaginative contributions to<br />
today's class. I think we have a really great group.<br />
<br />
By way of a reminder, the assignment for next Friday is to create a<br />
"tape" piece--a variation--in response to the given theme.<br />
(Note:"tape" is a misnomer of course, but historically quaint.) <br />
<br />
<br />
=== Specification ===<br />
<br />
*1. The theme, OnceUponATime.WAV, can be found on our coursework site under "materials."<br />
<br />
*2. Completed pieces must be 30 seconds to 2 minutes in duration.<br />
<br />
*3. Pieces must be submitted on audio CD (how nostalgic!) that will play in a commercial CD player, not a computer. A jewel case or sleeve is not required.<br />
<br />
*4. Please label the CD with your name and, optionally, a title.<br />
<br />
*5. Any audio tools may be used to transform the theme into your piece. However, the only sound source you may employ is the given theme; please do not add any additional samples or sound sources.<br />
<br />
That said, the given theme can be warped extensively beyond recognition--or not (as per your individual artistic goals).</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/hw2&diff=9870223-spring-2010/hw22010-04-27T22:15:41Z<p>Jsadural: /* Specification */</p>
<hr />
<div>= Homework #2: "Transformations" =<br />
<br />
Thanks for your very spirited and imaginative contributions to<br />
today's class. I think we have a really great group.<br />
<br />
By way of a reminder, the assignment for next Friday is to create a<br />
"tape" piece--a variation--in response to the given theme.<br />
(Note:"tape" is a misnomer of course, but historically quaint.) <br />
<br />
<br />
=== Specification ===<br />
<br />
*1. The theme, OnceUponATime.WAV, can be found on our coursework site under "materials."<br />
<br />
*2. Completed pieces must be 30 seconds to 2 minutes in duration.<br />
<br />
*3. Pieces must be submitted on audio CD (how nostalgic!) that will play in a commercial CD player, not a computer. A jewel case or sleeve is not required.<br />
<br />
*4. Please label the CD with your name and, optionally, a title.<br />
<br />
*5. Any audio tools may be used to transform the theme into your piece. However, the only sound source you may employ is the given theme; please do not add any additional samples or sound sources.<br />
That said, the given theme can be warped extensively beyond recognition--or not (as per your individual artistic goals).</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/hw2&diff=9869223-spring-2010/hw22010-04-27T22:15:29Z<p>Jsadural: </p>
<hr />
<div>= Homework #2: "Transformations" =<br />
<br />
Thanks for your very spirited and imaginative contributions to<br />
today's class. I think we have a really great group.<br />
<br />
By way of a reminder, the assignment for next Friday is to create a<br />
"tape" piece--a variation--in response to the given theme.<br />
(Note:"tape" is a misnomer of course, but historically quaint.) <br />
<br />
<br />
=== Specification ===<br />
<br />
*1. The theme, OnceUponATime.WAV, can be found on our coursework site under "materials."<br />
<br />
*2. Completed pieces must be 30 seconds to 2 minutes in duration.<br />
<br />
*3. Pieces must be submitted on audio CD (how nostalgic!) that will play in a commercial CD player, not a computer. A jewel case or sleeve is not required.<br />
<br />
*4. Please label the CD with your name and, optionally, a title.<br />
<br />
*5. Any audio tools may be used to transform the theme into your piece. However, the only sound source you may employ is the given theme; please do not add any additional samples or sound sources.<br />
That said, the given theme can be warped extensively beyond recognition--or not (as per your individual artistic goals).</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/hw2&diff=9868223-spring-2010/hw22010-04-27T22:14:29Z<p>Jsadural: Created page with '= Homework #2: "Transformations" = Thanks for your very spirited and imaginative contributions to today's class. I think we have a really great group. By way of a reminder, th…'</p>
<hr />
<div>= Homework #2: "Transformations" =<br />
<br />
Thanks for your very spirited and imaginative contributions to<br />
today's class. I think we have a really great group.<br />
<br />
By way of a reminder, the assignment for next Friday is to create a<br />
"tape" piece--a variation--in response to the given theme.<br />
(Note:"tape" is a misnomer of course, but historically quaint.) <br />
<br />
<br />
=== Specification ===<br />
<br />
**1. The theme, OnceUponATime.WAV, can be found on our coursework site<br />
under "materials."<br />
<br />
**2. Completed pieces must be 30 seconds to 2 minutes in duration.<br />
<br />
**3. Pieces must be submitted on audio CD (how nostalgic!) that will<br />
play in a commercial CD player, not a computer. A jewel case or<br />
sleeve is not required.<br />
<br />
**4. Please label the CD with your name and, optionally, a title.<br />
<br />
**5. Any audio tools may be used to transform the theme into your<br />
piece. However, the only sound source you may employ is the given<br />
theme; please do not add any additional samples or sound sources.<br />
That said, the given theme can be warped extensively beyond<br />
recognition--or not (as per your individual artistic goals).</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9867223-spring-20102010-04-27T22:09:14Z<p>Jsadural: /* labs + assignments on wiki */</p>
<hr />
<div><br />
* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/studentmusic|student music presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
* [[223-spring-2010/hw2|Assignment #2]]<br />
* [[223-spring-2010/hw3|Assignment #3]]<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010/hw1&diff=9866223-spring-2010/hw12010-04-27T22:08:40Z<p>Jsadural: Created page with '= Homework #1: "Bring an example of you Work" = For those who have some prior experience, please consider bringing an example of your work--an excerpt of a piece of music (eithe…'</p>
<hr />
<div>= Homework #1: "Bring an Example of Your Work" =<br />
<br />
For those who have some prior experience, please consider bringing an<br />
example of your work--an excerpt of a piece of music (either recorded<br />
or performed live), some kind of strange musical device you've<br />
invented, a score, etc. I'm hoping that most of you will be inclined<br />
to do this so that we all have an idea of our collective range of<br />
interests and experiences; that said, it is optional--and there may<br />
not be time to hear from everyone--so don't feel obliged.</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9865223-spring-20102010-04-27T21:58:21Z<p>Jsadural: /* labs + assignments on wiki */</p>
<hr />
<div><br />
* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/studentmusic|student music presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
* [[223-spring-2010/hw1|Assignment #1]]<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=223-spring-2010&diff=9864223-spring-20102010-04-27T21:55:29Z<p>Jsadural: Created page with ' * Frequently Asked Questions * student music presentations * FINAL PROJECTS WIKI ==…'</p>
<hr />
<div><br />
* [[220a-fall-2007/FAQ|Frequently Asked Questions]]<br />
* [[220a-spring-2010/studentmusic|student music presentations]]<br />
* [[220a-spring-2010/finalprojects|FINAL PROJECTS WIKI]]<br />
<br />
<br />
== labs + assignments on wiki ==<br />
<br />
<br />
<br />
== useful resources ==<br />
<br />
<br />
<br />
[[Category: Courses]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=FallConcert09&diff=9251FallConcert092009-11-11T07:14:15Z<p>Jsadural: </p>
<hr />
<div>'''Visda Goudarzi- Junkmail'''<br />
<br />
<br />
''Tech Needs''<br />
Projection, speakers: prefer stage audio system or similar<br />
<br />
''Samples:''<br />
https://ccrma.stanford.edu/~visda/junkmail2.wav<br />
<br />
''Description:''<br />
Junkmail is an audio-visual piece created in reaction to the fact that "it takes more than 100 million trees to produce the total volume of junk mail that arrives in American mailboxes each year."<br />
<br />
''Bio:''<br />
Visda Goudarzi is a computer musician interested in software development for computer music, human-computer interaction, gesture-based interfaces, computer graphics, and the application of new media in art. She <br />
recently graduated from the MA/MST program at CCRMA. She also has a Master's degree in Computer Science from TU Vienna and plays piano.<br />
<br />
<br />
'''Sweat Shop Boys - Adam Somers and Sean Price'''<br />
<br />
''Tech Needs:'' 2 Speakers/Amp, subwoofer preferred, 2 1/4" outputs, table, power. We<br />
are happy to play in a reverberant space<br />
<br />
''Website:'' http://sweatshopboys.com/<br />
<br />
''Sounds:'' http://sweatshopboys.com/?page=sounds<br />
<br />
''Description:'' Slowly evolving, dark, drone/ambient<br />
<br />
''Bio:'' The Sweat Shop Boys are a drone act formed in 2005 at CalArts by<br />
Adam Somers and Sean Price. Over the years they have refined a<br />
vocabulary for improvised noise and ambient performance using analog<br />
modular synthesizers and custom software. Sean Price currently<br />
resides in Oakland, CA and is attending Mills College in the<br />
Electronic Music MFA program. Adam Somers resides in Palo Alto and is<br />
attending Stanford University in CCRMA's MA/MST program.<br />
<br />
<br />
<br />
'''Hongchan Choi - Fragmenta'''<br />
<br />
* Fragmenta (real-time audiovisual performance OR playback recorded media)<br />
<br />
''Tech Needs: '' just a projector (with D-SUB input and an extension cable perhaps)<br />
<br />
''Sample video: '' http://www.youtube.com/watch?v=Ajz5aF8cbyQ <br />
<br />
''Description: '' The project "Fragmenta" is an aesthetic and experimental approach to creating audiovisual art with rich inter-media interaction. Guided by the notion of an "organic binding" between audio and visual objects, the main goal of this series of experiments is to make audiences perceive the audiovisual scenes as a unified whole. The piece was implemented with two software platforms, ChucK and Processing, with OSC (Open Sound Control) used to interconnect the two applications. <br />
<br />
* Bio<br />
<br />
Hongchan Choi is a composer/creative coder eager to experiment with artistic mixtures of music and visuals. After years of undergraduate study in information engineering and a master's degree in computer music, he is now a candidate for a doctoral degree (D.M.A.) and a graduate teaching fellow at Dongguk University. Creating a variety of multimedia works such as cross-modal performances and audiovisual installations, he has participated in numerous concerts and exhibitions in Seoul, Korea. <br />
<br />
<br />
<br />
'''Adam Sheppard, Bjorn Erlach, Xiang Zhang - $$$$$ iN tHe BaNK'''<br />
<br />
''Tech Needs'' : Stereo Speaker Set-Up with amplification. <br />
<br />
''Influences'' : Too $hort, Merzbow, Fleetwood Mac<br />
<br />
''Description'' : Noise, Pop, Dirty Rap<br />
<br />
''Bio'' : Adam, Bjorn, and Xiang are currently students at CCRMA. They are good friends and enjoy making music together.<br />
<br />
<br />
'''Carr Wilkerson - LOLFO'''<br />
<br />
''Tech Needs'' - stereo 1/4 in out, 1/2 table space, 3 power outlets, (no video/graphics planned), it would be nice to have a sub.<br />
<br />
''Samples'' [http://dubstep.fm]<br />
<br />
''Description'' - downtempo ambient dubstep<br />
<br />
''Bio'' Carr Wilkerson is a System Administrator at CCRMA specializing in Linux and Mac OS systems. He is a controller and software system builder and sometime performer/impresario. He has a BS in Physics from Tulane University, a Master of Arts in Music, Science and Technology from Stanford University, and a Master of Engineering in Electrical Engineering from Tulane. In a previous life, he was a US Navy Nuclear Propulsion Engineer (think Scotty).<br />
<br />
<br />
<br />
<br />
'''Steinunn Arnardottir - Put my hands in your pocket project'''<br />
<br />
''Tech Needs''<br />
Speakers, cables from mixer to speaker (XLR or 1/4" jack outputs), table<br />
<br />
''Samples''<br />
http://mp3.breakbeat.is/breakbeat/leopold/demo/put_my_hands_unfinished_demo.mp3<br />
<br />
''Description''<br />
A DJ set, possibly with some liveness added to it.<br />
<br />
''Bio''<br />
Steinunn Arnardottir received her B.Sc. degree in Electrical and<br />
Computer Engineering from the University of Iceland in 2006 and a M.A.<br />
in Music, Science and Technology from Stanford's Center for Computer<br />
Research in Music and Acoustics (CCRMA) in 2008.<br />
She is currently working toward a M.Sc. degree in Electrical<br />
Engineering at Stanford University and will graduate in Spring 2010.<br />
<br />
<br />
'''Fernando Lopez-Lezcano - A Very Fractal Cat, unCaged'''<br />
<br />
''Tech Needs'' Stage audio system<br />
<br />
''Samples''<br />
<br />
''Description'' A Very Fractal Cat, unCaged<br />
<br />
This is the second version of the second piece (after “Cat Walk”) in a series of algorithmic performance pieces for pianos, computer and cat that I started working on last year (the proverbial cat walking on a piano keyboard). The performer connects through a keyboard controller to four virtual pianos, both directly and through algorithms. Throughout the piece, different note and phrase generation algorithms are triggered by the performer's actions, including Markov chains that the virtual cat uses to learn from the performer, fractal melodies, plain scales and trills, and other even simpler algorithms. The sound of the pianos is heard directly, and is also processed using spectral, granular and other synthesis techniques at different points in the performance, creating spaces through which the performer moves. A surround environment is created with Ambisonics spatialization, and everything in the piece was written in SuperCollider. <br />
<br />
''Bio''<br />
Fernando López-Lezcano is a composer, performer, lecturer and computer systems administrator at CCRMA, Stanford University. He has been taking care of computing resources there since 1993, including the Planet CCRMA collection of open source sound and music packages for Linux, which he created and has maintained since 2001. He has been involved in the field of electronic music since 1976, and his music has been released on CD and played in the Americas, Europe and East Asia.<br />
<br />
<br />
'''Cobi Van Tonder - title needed if any'''<br />
<br />
''Tech Needs''<br />
<br />
''Samples''<br />
<br />
''Description''<br />
<br />
<br />
<br />
'''Luke Dahl - The Tom Jonestown Experience'''<br />
<br />
''Tech Needs''<br />
Stereo audio transduction<br />
<br />
''Samples'' <br />
http://www.myspace.com/lukedahl<br />
<br />
''Description'' <br />
Electronic dance-like music performed live for your enjoyment.<br />
<br />
''Bio'' <br />
Luke Dahl is a PhD student at CCRMA whose research interests include physical gestures in new music instruments and musical information retrieval. He also composes and performs electronic dance music.<br />
<br />
<br />
</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Openmixer&diff=6749Openmixer2008-12-15T18:44:01Z<p>Jsadural: </p>
<hr />
<div>'''openmixer'''<br />
<br />
A fully extensible, software-based, dedicated mixing console emulation. This page is mainly a sketchbook for putting such a system together. For the most up-to-date user interface documentation, go [http://ccrma.stanford.edu/~jsadural/openmixer/index.html here ]<br />
==Requirements==<br />
<br />
What do we want to do when we get to the Studio that houses the OpenMixer system:<br />
<br />
<ul><br />
<br />
<li> log in to the studio workstation, press a button in the UI that says "Workstation" and have all 24 channels of the soundcard connected to all 24 speakers, control overall volume with a fader labeled "Master"<br />
<br />
<li> walk to the dvd player, insert a DVD disk, press a button in the UI that says "DVD" and have the DVD play in 5.1, control overall volume with a fader labeled "Master"<br />
<br />
<li> connect a laptop for stereo playback by connecting to a 1/8" plug, press a button in the UI that says "Laptop" and have stereo playback properly routed, control overall volume with a fader labeled "Master"<br />
<br />
<li> connect a laptop for 8 channel playback by connecting multiple outputs to 1/4" plugs, press a button in the UI that says "Laptop8" and have 8 channels routed to the mid 8 channel speaker ring, control overall volume with a fader labeled "Master"<br />
<br />
</ul><br />
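The "one button, one routing" behaviour described above could be prototyped as a small table of JACK connection presets. A minimal sketch in Python follows; the port names (soundcard:out_N, laptop:out_N, system:playback_N) and the preset contents are hypothetical placeholders, not the actual OpenMixer port names.<br />

```python
# Sketch of the "press one button, get a full routing" idea from the
# requirements above.  Port names follow generic JACK conventions and
# stand in for whatever the real hardware exposes.

def routing_for(source: str, n_speakers: int = 24):
    """Return (src_port, dst_port) pairs for one preset button."""
    presets = {
        # Workstation: soundcard channel i -> speaker i, all 24 channels
        "Workstation": [(f"soundcard:out_{i}", f"system:playback_{i}")
                        for i in range(1, n_speakers + 1)],
        # Laptop: stereo input -> the first stereo speaker pair
        "Laptop": [("laptop:out_1", "system:playback_1"),
                   ("laptop:out_2", "system:playback_2")],
        # Laptop8: 8 channels -> the mid 8-speaker ring (channels 1-8 here)
        "Laptop8": [(f"laptop:out_{i}", f"system:playback_{i}")
                    for i in range(1, 9)],
    }
    return presets[source]

def jack_connect_commands(source: str):
    """Emit the jack_connect invocations a control program might run."""
    return [f"jack_connect {a} {b}" for a, b in routing_for(source)]

for cmd in jack_connect_commands("Laptop"):
    print(cmd)
```

jack_connect is the standard JACK command-line client; a real control program would more likely talk to libjack directly instead of spawning processes.<br />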
<br />
==Hardware==<br />
<br />
The OpenMixer software runs on a dedicated fanless computer. Zalman has discontinued their TNN300 and TNN500AF products, so we have to look for other options. <br />
<br />
===Host Computer===<br />
<br />
Options for fanless computers:<br />
<br />
Question: should a second 1Gb/s Ethernet port be a requirement? It would enable a local audio-only subnet for jacktrip, without any other packets interfering with the audio streams. <br />
<br />
==== TNN500 case ====<br />
<br />
Discontinued...<br />
<br />
==== TNN300 case ====<br />
<br />
Discontinued but still available, so it is an option. Advantages: a product we know that works very well. Disadvantages: 70W max power for the CPU, so no quad-core options; the form factor could be smaller (we need something "hidden"). <br />
<br />
==== mCubed case ====<br />
<br />
http://www.endpcnoise.com/cgi-bin/e/std/sku=fanless_mcubed_pc.html?id=gXEitX65<br />
<br />
Review of a computer using the same case:<br />
http://www.silentpcreview.com/article678-page1.html<br />
<br />
==== Logisys computers ====<br />
<br />
http://www.logisysus.com/<br />
<br />
<ul><br />
<br />
<li>Core Duo, fanless, 1 PCI slot ($1025 base price):<br />
https://logisysus.com/catalog/product_info.php?cPath=95&products_id=541<br />
<br />
<li>Core Duo, fanless, 2 PCI slots ($1125 base price):<br />
https://logisysus.com/catalog/product_info.php?cPath=95&products_id=540<br />
<br />
Example: Core Duo 2.16GHz, 2Gb RAM (667), 80G 2.5" 7200RPM hard disk: $1778<br />
<br />
<li>Core Duo, fanless, 1 PCI slot ($775 base price): <br />
https://logisysus.com/catalog/product_info.php?products_id=539<br />
<br />
</ul><br />
<br />
==== Other fanless small computers ====<br />
<br />
<ul><br />
<br />
<li> http://www.smallpc.com/prod_sc240.php (no PCI slots)<br />
<br />
</ul><br />
<br />
===Audio Interface===<br />
<br />
====Requirements====<br />
<br />
<ul><br />
<br />
<li>Outputs<br />
<br />
<ul><br />
<br />
<li>24 analog outputs (speakers - currently 16 used)<br />
<li>24 digital channels (3 x ADAT) to send to computers, etc<br />
<br />
</ul><br />
<br />
<li>Inputs<br />
<br />
<ul><br />
<br />
<li>6 analog input channels for a DVD or Blu-ray/DVD player<br />
<li>8 analog inputs for external analog sources (laptop, etc)<br />
<li>8 analog microphone inputs through an ADAT pipe<br />
<li>24 digital inputs (3 x ADAT pipes) for main computer<br />
<li>16 digital inputs (2 x ADAT pipes) for external computer (laptop, etc)<br />
<br />
</ul><br />
<br />
</ul><br />
<br />
====Options====<br />
<br />
The audio input and output is controlled by an RME HDSP MADI PCI card (http://www.rme-audio.de/en_products_hdsp_madi.php), which provides 64 inputs, 64 outputs, MIDI i/o, word clock i/o, etc. It uses up 2 back-panel slots but is inserted into one PCI slot. Another option is the RME MADI PCI Express card (http://www.rme-audio.de/en_products_hdspe_madi.php), with the same capabilities on a different bus. <br />
<br />
<br />
====Solid State Logic interfaces====<br />
<br />
Alpha-Link MADI AX ($3745 @ Sweetwater, http://www.solidstatelogic.com/Music/Xlogic Alpha-Link/index.asp)<br />
<br />
<ul><br />
<li>24 analog i/o<br />
<li>3 x ADAT i/o<br />
</ul><br />
<br />
So, we are missing 2x ADAT i/o plus mic preamps<br />
<br />
We could use a second RME HDSP card in the mixing computer (with a Digiface we get 3 x ADAT i/o). That would need a TNN300 case to have all the required expansion slots. <br />
<br />
====RME ADI-648====<br />
<br />
<ul><br />
<li>8 x ADAT i/o<br />
</ul><br />
<br />
We need (extra):<br />
<br />
<ul><br />
<li>3 ADAT outs to 24 channel analog out<br />
<li>2 ADAT ins from 16 channel analog in (line level)<br />
<li>1 ADAT in from 8 channel analog in (mic level)<br />
</ul><br />
<br />
Free:<br />
<br />
<ul><br />
<li>2 ADAT ins<br />
</ul><br />
<br />
The MADI output is split into ADAT outputs using the RME ADI-648 ADAT/MADI converter (http://www.rme-audio.de/en_products_adi_648.php) (we get 8 ADAT pipes from it). The ADAT pipes can be connected directly to other computers or equipment, or go through ADAT D/A and/or A/D converters to connect to analog equipment.<br />
<br />
===Control Surface===<br />
<br />
The simplest alternative is to use a USB (MIDI over USB) fader box. The Behringer BCF-2000 (http://www.behringer.com/BCF2000/index.cfm) is an inexpensive 8-fader box that could be used as the basic control interface. The Behringer BCR-2000 (http://www.behringer.com/BCR2000/index.cfm) can be used as an auxiliary control interface to do channel assignments. All the interaction between the user and OpenMixer would happen through that interface.<br />
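A control program reading such a fader box would mainly need to map 7-bit MIDI fader values to gains. The taper below is an illustrative choice (a -60 dB law with a hard mute at 0), not the BCF-2000's or OpenMixer's actual behaviour.<br />

```python
def fader_to_gain(cc_value: int, min_db: float = -60.0) -> float:
    """Map a 7-bit MIDI fader value (0-127) to a linear gain factor.

    0 is treated as a full mute; the rest of the fader travel is a dB
    taper from min_db up to 0 dB.  The taper choice is illustrative only.
    """
    if not 0 <= cc_value <= 127:
        raise ValueError("MIDI CC values are 0-127")
    if cc_value == 0:
        return 0.0
    db = min_db * (1.0 - cc_value / 127.0)
    return 10.0 ** (db / 20.0)

print(fader_to_gain(127))  # full fader -> unity gain
```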
<br />
==Software==<br />
<br />
The OpenMixer computer boots into the control program. At least in the first stage it does not even have a monitor, keyboard and mouse for user interaction (just the control surfaces outlined above). <br />
<br />
The software would consist of several separate processes that can be started from runlevel 3 (or runlevel 5 if a GUI and X are used). <br />
<br />
<ul><br />
<br />
<li>control program<br />
<br />
<li>jackmp server<br />
<br />
<li>jacktrip server<br />
<br />
</ul><br />
<br />
<br />
[[Category:Projects]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Openmixer&diff=6748Openmixer2008-12-15T18:41:55Z<p>Jsadural: </p>
<hr />
<div>'''openmixer'''<br />
<br />
A fully-extensible software-based dedicated mixing console emulation.<br />
For the most up to date user interface documentation go [http://ccrma.stanford.edu/~jsadural/openmixer/index.html here ]<br />
==Requirements==<br />
<br />
What do we want to do when we get to the Studio that houses the OpenMixer system:<br />
<br />
<ul><br />
<br />
<li> log in to the studio workstation, press a button in the UI that says "Workstation" and have all 24 channels of the soundcard connected to all 24 speakers, control overall volume with a fader labeled "Master"<br />
<br />
<li> walk to the dvd player, insert a DVD disk, press a button in the UI that says "DVD" and have the DVD play in 5.1, control overall volume with a fader labeled "Master"<br />
<br />
<li> connect a laptop for stereo playback by connecting to a 1/8" plug, press a button in the UI that says "Laptop" and have stereo playback properly routed, control overall volume with a fader labeled "Master"<br />
<br />
<li> connect a laptop for 8 channel playback by connecting multiple outputs to 1/4" plugs, press a button in the UI that says "Laptop8" and have 8 channels routed to the mid 8 channel speaker ring, control overall volume with a fader labeled "Master"<br />
<br />
</ul><br />
<br />
==Hardware==<br />
<br />
The OpenMixer software runs on a dedicated fanless computer. Zalman has discontinued their TNN300 and TNN500AF products, so we have to look for other options. <br />
<br />
===Host Computer===<br />
<br />
Options for fanless computers:<br />
<br />
Question: should a second 1Gb/s Ethernet port be a requirement? It would enable a local audio-only subnet for jacktrip, without any other packets interfering with the audio streams. <br />
<br />
==== TNN500 case ====<br />
<br />
Discontinued...<br />
<br />
==== TNN300 case ====<br />
<br />
Discontinued but still available, so it is an option. Advantages: a product we know that works very well. Disadvantages: 70W max power for the CPU, so no quad-core options; the form factor could be smaller (we need something "hidden"). <br />
<br />
==== mCubed case ====<br />
<br />
http://www.endpcnoise.com/cgi-bin/e/std/sku=fanless_mcubed_pc.html?id=gXEitX65<br />
<br />
Review of a computer using the same case:<br />
http://www.silentpcreview.com/article678-page1.html<br />
<br />
==== Logisys computers ====<br />
<br />
http://www.logisysus.com/<br />
<br />
<ul><br />
<br />
<li>Core Duo, fanless, 1 PCI slot ($1025 base price):<br />
https://logisysus.com/catalog/product_info.php?cPath=95&products_id=541<br />
<br />
<li>Core Duo, fanless, 2 PCI slots ($1125 base price):<br />
https://logisysus.com/catalog/product_info.php?cPath=95&products_id=540<br />
<br />
Example: Core Duo 2.16GHz, 2Gb RAM (667), 80G 2.5" 7200RPM hard disk: $1778<br />
<br />
<li>Core Duo, fanless, 1 PCI slot ($775 base price): <br />
https://logisysus.com/catalog/product_info.php?products_id=539<br />
<br />
</ul><br />
<br />
==== Other fanless small computers ====<br />
<br />
<ul><br />
<br />
<li> http://www.smallpc.com/prod_sc240.php (no PCI slots)<br />
<br />
</ul><br />
<br />
===Audio Interface===<br />
<br />
====Requirements====<br />
<br />
<ul><br />
<br />
<li>Outputs<br />
<br />
<ul><br />
<br />
<li>24 analog outputs (speakers - currently 16 used)<br />
<li>24 digital channels (3 x ADAT) to send to computers, etc<br />
<br />
</ul><br />
<br />
<li>Inputs<br />
<br />
<ul><br />
<br />
<li>6 analog input channels for a DVD or Blu-ray/DVD player<br />
<li>8 analog inputs for external analog sources (laptop, etc)<br />
<li>8 analog microphone inputs through an ADAT pipe<br />
<li>24 digital inputs (3 x ADAT pipes) for main computer<br />
<li>16 digital inputs (2 x ADAT pipes) for external computer (laptop, etc)<br />
<br />
</ul><br />
<br />
</ul><br />
<br />
====Options====<br />
<br />
The audio input and output is controlled by an RME HDSP MADI PCI card (http://www.rme-audio.de/en_products_hdsp_madi.php), which provides 64 inputs, 64 outputs, MIDI i/o, word clock i/o, etc. It uses up 2 back-panel slots but is inserted into one PCI slot. Another option is the RME MADI PCI Express card (http://www.rme-audio.de/en_products_hdspe_madi.php), with the same capabilities on a different bus. <br />
<br />
<br />
====Solid State Logic interfaces====<br />
<br />
Alpha-Link MADI AX ($3745 @ Sweetwater, http://www.solidstatelogic.com/Music/Xlogic Alpha-Link/index.asp)<br />
<br />
<ul><br />
<li>24 analog i/o<br />
<li>3 x ADAT i/o<br />
</ul><br />
<br />
So, we are missing 2x ADAT i/o plus mic preamps<br />
<br />
We could use a second RME HDSP card in the mixing computer (with a Digiface we get 3 x ADAT i/o). That would need a TNN300 case to have all the required expansion slots. <br />
<br />
====RME ADI-648====<br />
<br />
<ul><br />
<li>8 x ADAT i/o<br />
</ul><br />
<br />
We need (extra):<br />
<br />
<ul><br />
<li>3 ADAT outs to 24 channel analog out<br />
<li>2 ADAT ins from 16 channel analog in (line level)<br />
<li>1 ADAT in from 8 channel analog in (mic level)<br />
</ul><br />
<br />
Free:<br />
<br />
<ul><br />
<li>2 ADAT ins<br />
</ul><br />
<br />
The MADI output is split into ADAT outputs using the RME ADI-648 ADAT/MADI converter (http://www.rme-audio.de/en_products_adi_648.php) (we get 8 ADAT pipes from it). The ADAT pipes can be connected directly to other computers or equipment, or go through ADAT D/A and/or A/D converters to connect to analog equipment.<br />
<br />
===Control Surface===<br />
<br />
The simplest alternative is to use a USB (MIDI over USB) fader box. The Behringer BCF-2000 (http://www.behringer.com/BCF2000/index.cfm) is an inexpensive 8-fader box that could be used as the basic control interface. The Behringer BCR-2000 (http://www.behringer.com/BCR2000/index.cfm) can be used as an auxiliary control interface to do channel assignments. All the interaction between the user and OpenMixer would happen through that interface.<br />
<br />
==Software==<br />
<br />
The OpenMixer computer boots into the control program. At least in the first stage it does not even have a monitor, keyboard and mouse for user interaction (just the control surfaces outlined above). <br />
<br />
The software would consist of several separate processes that can be started from runlevel 3 (or runlevel 5 if a GUI and X are used). <br />
<br />
<ul><br />
<br />
<li>control program<br />
<br />
<li>jackmp server<br />
<br />
<li>jacktrip server<br />
<br />
</ul><br />
<br />
<br />
[[Category:Projects]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Flute_Tracking&diff=6706Flute Tracking2008-12-12T10:23:48Z<p>Jsadural: </p>
<hr />
<div>'''Flute Tracking with microphones'''<br />
<br />
A flute position tracking technique to capture physical gestures during flute performance<br />
<br />
==Trial 1==<br />
<br />
Tools:<br />
<ul><br />
<li>A stick<br />
<li>2 Panasonic Omni Mics<br />
<li>Electrical tape<br />
</ul><br />
<br />
<ul><br />
<br />
Instructions:<br />
</ul><br />
<br />
<ul><br />
Take a 26-inch stick and tape the two microphones to its ends. Use the cross-correlation peak of the stereo recording to track the time delay and determine position. <br />
</ul><br />
<ul><br />
Result:<br />
</ul><br />
<ul><br />
The correlation tracking worked well for hissing noise but could not track the flute sound well.<br />
</ul><br />
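The cross-correlation step above can be sketched as follows. The sample rate and the synthetic test signal are assumptions for illustration, and a real implementation would use an FFT-based correlation rather than this brute-force loop.<br />

```python
# Minimal sketch of the delay-tracking idea: find the lag at which the
# cross-correlation of the two microphone signals peaks, and convert
# that lag to a time (and hence path-length) difference.
import random

def delay_samples(left, right, max_lag):
    """Return the lag (in samples) that maximizes cross-correlation."""
    n = len(left)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(left[i] * right[i + lag]
                    for i in range(n)
                    if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A hissing-noise-like test signal correlates well, as observed in Trial 1.
random.seed(1)
noise = [random.uniform(-1, 1) for _ in range(2000)]
true_delay = 7                       # samples of inter-mic delay
left = noise
right = [0.0] * true_delay + noise[:-true_delay]
lag = delay_samples(left, right, max_lag=20)
fs = 44100.0                         # assumed sample rate, Hz
print(lag, lag / fs * 343.0)         # lag in samples, path difference in m
```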
<br />
<br />
</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Open_Source_for_HighSchool_Multimedia_and_Journalism&diff=6705Open Source for HighSchool Multimedia and Journalism2008-12-12T10:03:26Z<p>Jsadural: /* Current equipment */</p>
<hr />
<div>== Project Summary ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
<br />
Our purpose is to introduce Tennyson High School Multimedia & Journalism Department students to open source tools in their current projects. Current projects include radio screenplays, weekly news reports, educational videos, music composition, and virtual yearbooks. The Multimedia & Journalism department at Tennyson High School has existed for 3 years and is currently the most prosperous department, academically and creatively, at Tennyson. Many of the topics discussed will relate to the use of freely licensed software for synthesis, recording, and mastering at the latest standards of quality in the music industry. <br />
<br />
<br />
Topics will include digital effects, rhythmic loop based techniques, and multi-track audio.<br />
<br />
== Minimum Requirements ==<br />
<br />
#* Pentium 4 or above<br />
#* Multi-channel soundcard<br />
#* Midi controllers<br />
#* unfinished<br />
<br />
[[Category:Projects]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Open_Source_for_HighSchool_Multimedia_and_Journalism&diff=6704Open Source for HighSchool Multimedia and Journalism2008-12-12T10:01:51Z<p>Jsadural: /* Project Summary */</p>
<hr />
<div>== Project Summary ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
<br />
Our purpose is to introduce Tennyson High School Multimedia & Journalism Department students to open source tools in their current projects. Current projects include radio screenplays, weekly news reports, educational videos, music composition, and virtual yearbooks. The Multimedia & Journalism department at Tennyson High School has existed for 3 years and is currently the most prosperous department, academically and creatively, at Tennyson. Many of the topics discussed will relate to the use of freely licensed software for synthesis, recording, and mastering at the latest standards of quality in the music industry. <br />
<br />
<br />
Topics will include digital effects, rhythmic loop based techniques, and multi-track audio.<br />
<br />
== Current equipment ==<br />
<br />
#* G4 Desktop<br />
#* Digital Camcorder<br />
#* 3 shure microphones<br />
#* unfinished<br />
<br />
[[Category:Projects]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=File:MainComputer.jpg&diff=6703File:MainComputer.jpg2008-12-12T09:41:44Z<p>Jsadural: </p>
<hr />
<div></div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=File:MainComputer.jpeg&diff=6702File:MainComputer.jpeg2008-12-12T09:40:01Z<p>Jsadural: Openmixer_MainComputer</p>
<hr />
<div>Openmixer_MainComputer</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=File:1_final_220a.jpeg&diff=6701File:1 final 220a.jpeg2008-12-12T09:33:23Z<p>Jsadural: Minimum specification</p>
<hr />
<div>Minimum specification</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=220a-fall-2008/finalprojects&diff=6700220a-fall-2008/finalprojects2008-12-12T09:28:01Z<p>Jsadural: </p>
<hr />
<div>[https://cm-wiki.stanford.edu/wiki/Daniel_Smith_Final_Project_Music_220a_Fall_2008 Final Project!] - Daniel Smith<br />
<br />
[https://cm-wiki.stanford.edu/wiki/Fractal_Computation_in_ChucK_-_The_Julia_Class OMG] - Chris Weil<br />
<br />
[http://ccrma.stanford.edu/~chanson9/220/GRANULATOR.html The Granulator!!!] - Craig Hanson<br />
<br />
[http://ccrma.stanford.edu/~jorgeh/220a/project/index.php 3D Sounds] - Jorge Herrera<br />
<br />
[http://ccrma.stanford.edu/~aeg165 Easy - 8 channel audio composition (by which socks may be rocked off)] - Andy Greenwood<br />
<br />
[https://cm-wiki.stanford.edu/wiki/Siqi_Mou_Final_Project_Music_220a_Fall_2008 Why Are There So Many Me's?- Siqi Mou Music 220A Final Project]<br />
<br />
[http://ccrma.stanford.edu/~jakesb/220a/index.html Give Peace A Chance] - Jakes Bejoy<br />
<br />
[http://ccrma.stanford.edu/~stretto/220a/project/index.html Fun With The Stage] - Grahame Lesh<br />
<br />
[http://ccrma.stanford.edu/~darkowen/220a/project.html Sound Painting] - Mofei Zhu<br />
<br />
[http://sufferforfash.wordpress.com/2008/12/11/in-which-i-hijack-my-own-blog-for-my-music-project Sound-Inspired Jewelry] - Lauren Nguyen<br />
<br />
[http://gtakacs.stanford.edu/~gtakacs/mus220/index.htm The Sound of Chaos] - Gabriel Takacs<br />
<br />
[http://ccrma.stanford.edu/~benolson/snddraw/ sndDraw] - Ben Olson<br />
<br />
[http://ccrma.stanford.edu/~mikegao/index.html Trivial Intelligent Pitch Bend and WaveCut] - Mike Gao<br />
<br />
[http://www.stanford.edu/~rparikh/cgi-bin/music.html PASSWORD: pokeyman...also, ignore the junk surrounding the content] - Ravi "Ravi" Parikh...there is a password to view it, and the password is "pokeyman."<br />
<br />
[http://www.stanford.edu/~ppetroff/final220a/ Dreams of Sleep] - <3 Peter Petroff<br />
<br />
[file:///user/v/vwang/Library/Web/220a/StepBattle.html] - Vivian Wang<br />
<br />
[http://ccrma.stanford.edu/~kapilkm/220a/project.html Sonic Balls ] - Kapil Krishnamurthy<br />
<br />
[http://ccrma.stanford.edu/~hugo/220a/project/index.html Noise => Noise] - Hugo Guo<br />
<br />
[http://ccrma.stanford.edu/~visda/220a/final/index.html chachachoochoo] - Visda Goudarzi<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Openmixer OpenMixer] and [http://cm-wiki.stanford.edu/wiki/Listening_Room_Specs ListeningRoomSpecs] - Jason Sadural<br />
<br />
[http://ccrma.stanford.edu/~kcaluko/220a/finalproject.html Chopped & Screwed] - Kike Aluko</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=220a-fall-2008/finalprojects&diff=6697220a-fall-2008/finalprojects2008-12-12T06:45:30Z<p>Jsadural: </p>
<hr />
<div>[https://cm-wiki.stanford.edu/wiki/Daniel_Smith_Final_Project_Music_220a_Fall_2008 Final Project!] - Daniel Smith<br />
<br />
[https://cm-wiki.stanford.edu/wiki/Fractal_Computation_in_ChucK_-_The_Julia_Class OMG] - Chris Weil<br />
<br />
[http://ccrma.stanford.edu/~chanson9/220/GRANULATOR.html The Granulator!!!] - Craig Hanson<br />
<br />
[http://ccrma.stanford.edu/~jorgeh/220a/project/index.php 3D Sounds] - Jorge Herrera<br />
<br />
[http://ccrma.stanford.edu/~aeg165 Easy - 8 channel audio composition (by which socks may be rocked off)] - Andy Greenwood<br />
<br />
[https://cm-wiki.stanford.edu/wiki/Siqi_Mou_Final_Project_Music_220a_Fall_2008 Why Are There So Many Me's?- Siqi Mou Music 220A Final Project]<br />
<br />
[http://ccrma.stanford.edu/~jakesb/220a/index.html Give Peace A Chance] - Jakes Bejoy<br />
<br />
[http://ccrma.stanford.edu/~stretto/220a/project/index.html Fun With The Stage] - Grahame Lesh<br />
<br />
[http://ccrma.stanford.edu/~darkowen/220a/project.html Sound Painting] - Mofei Zhu<br />
<br />
[http://sufferforfash.wordpress.com/2008/12/11/in-which-i-hijack-my-own-blog-for-my-music-project Sound-Inspired Jewelry] - Lauren Nguyen<br />
<br />
[http://gtakacs.stanford.edu/~gtakacs/mus220/index.htm The Sound of Chaos] - Gabriel Takacs<br />
<br />
[http://ccrma.stanford.edu/~benolson/snddraw/ sndDraw] - Ben Olson<br />
<br />
[http://ccrma.stanford.edu/~mikegao/index.html Trivial Intelligent Pitch Bend and WaveCut] - Mike Gao<br />
<br />
[http://www.stanford.edu/~rparikh/cgi-bin/music.html PASSWORD: pokeyman...also, ignore the junk surrounding the content] - Ravi "Ravi" Parikh...there is a password to view it, and the password is "pokeyman."<br />
<br />
[http://www.stanford.edu/~ppetroff/final220a/ Dreams of Sleep] - <3 Peter Petroff<br />
<br />
[file:///user/v/vwang/Library/Web/220a/StepBattle.html] - Vivian Wang<br />
<br />
[http://ccrma.stanford.edu/~kapilkm/220a/project.html Sonic Balls ] - Kapil Krishnamurthy<br />
<br />
[http://ccrma.stanford.edu/~hugo/220a/project/index.html Noise => Noise] - Hugo Guo<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Openmixer OpenMixer] [http://cm-wiki.stanford.edu/wiki/Listening_Room_Specs ListeningRoomSpecs] - jason Sadural</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Listening_Room_Specs&diff=6684Listening Room Specs2008-12-12T01:34:43Z<p>Jsadural: /* Welcome to the Pit Page */</p>
<hr />
<div>== Welcome to the Pit Page ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
This page discusses the room dimensions as well as the process of calibrating the spherical VBAP configuration. It also discusses the equipment and software used to operate sound, and how to configure and save settings for your personal needs. If you move any speaker in the room you will be personally responsible for recalculating and recalibrating the speaker configuration (in other words, "DON'T DO IT!!!"). In the future we hope to update this page with a new wave field synthesis installation. In 2008 we will be implementing a new system formally titled Openmixer. <br />
<br />
<br />
<br />
# Room Dimensions<br />
#* 277 inches x 292 inches x 86 inches (height)<br />
# There exists a heptagon in the center with 3 steps downwards to a cage in the middle<br />
#* Height of each step = 5.5 inches<br />
#* Length of step = 11.75 inches<br />
#* Length of side of Heptagon at top = 84 inches (same as top)<br />
#* Depth of indentation in ceiling = 15 inches<br />
#* Length of side of Heptagon at cage(bottom) = 63 inches<br />
#* Depth to bottom of cage = 41 inches<br />
#* Closest point of Heptagon to wall = 51 inches<br />
# Equipment and software<br />
#* Samsung 17 inch monitor<br />
#* Linux box<br />
#* RME Hammerfall DSP Digiface<br />
#* Tascam TDIF-1/ADAT interface format converter<br />
#* Tascam DM-3200<br />
#* 5.1 DVD player<br />
#* 8 Mackie HR824<br />
#* 8 Mackie HR624<br />
#* AKG C-414 Mic/Stand/adapter<br />
#* ALSA HDSP<br />
#* Jack <br />
# Before beginning, create your own project by clicking "alt-Project" and cursor down to "store as." Name your project using the Dial and the enter key. After doing so, follow instructions and store at the end of calibration. Please use current calculations of speaker configuration instead of repositioning speakers. <br />
# Current speaker configuration<br />
#* speaker 1 is on channel 1, assigned to output 7; speaker configuration is counter-clockwise starting at 0 degrees<br />
#* speakers are in an octagonal setup, 92 inches apart<br />
#* Distance to sweetspot from each horizontal speaker is 120.205 inches<br />
#* Verticle speakers distance will be termed in counterclockwise order PhiU1(upper1 azimuth), ThetaU1(upper1 angle), ru1(upper1 distance) and so on(all in degrees). <br />
#* Speaker 1 horizontal = 120.205 <br />
#* ThetaU1 = 46.123 ; PhiU1 = 34.582; rU1 = 88.8147<br />
#* ThetaU2 = 134.578; PhiU2 = 35.213; rU2 = 87.2017<br />
#* ThetaU3 = 225.119; PhiU3 = 29.898; rU3 = 90.2783<br />
#* ThetaU4 = 328.552; PhiU4 = 31.448; rU4 = 91.3406<br />
#* ThetaD1 = 46.429 ; PhiD1 = -61.214; rD1 = 95.1767<br />
#* ThetaD2 = 135.557; PhiD2 = -60.980; rD2 = 95.6595<br />
#* ThetaD3 = 235.089; PhiD3 = -43.200; rD3 = 89.9286 (Possible Error)<br />
#* ThetaD4 = 317.181; PhiD4 = -55.781; rD4 = 98.3265<br />
# Speaker Positioning: To calibrate the speaker positions, I used a T-square, laser line tool, wire, tape measure, and protractor. I made sure that the sweet spot would fall intuitively at a seat placed in the center of the Pit.<br />
#* Use obvious methods to align the horizontal speakers in an octagon at an arbitrary distance measured from the center of the cage. <br />
#* Measure 3 distances from each vertical (above/below) speaker to 3 different horizontal speakers <br />
#* Use [http://en.wikipedia.org/wiki/Trilateration trilateration] to solve for the xyz position in space<br />
#* Use octagon properties to perform a coordinate transformation to find the position relative to the sweet spot (center of octagon)<br />
#* A PD patch that calculates the intersecting spheres is located on ccrma-gate at /jsadura/trilateration <br />
# Speaker Calibration<br />
#* Settings: Acoustic Space "B" half; Low Freq 37 Hz; High Freq 0 Hz; Power Mode ON<br />
#* Make sure all bus and gain settings are equal on the mixer<br />
#* One speaker at a time, adjust the gain and use pink noise to calibrate equal SPL levels at the sweet spot<br />
# Time Delay<br />
#* Calculate the distance (R) to each vertical speaker from the trilateration page and find the difference from the distance to the horizontal speakers<br />
#* At a 44.1 kHz sample rate, divide the distance difference in inches by roughly 0.3 (the distance sound travels per sample) to calculate the delay in samples<br />
#* On the Tascam DM-3200 mixer, select "Module" on the top right and move the cursor down to Delay <br />
#* Hold [http://en.wiktionary.org/wiki/shift#Verb Shift] while turning the third knob for smaller increments of delay in samples.<br />
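The trilateration and sample-delay arithmetic in the steps above can be sketched in Python. This is an illustrative helper, not the PD patch referenced above; the speed of sound (~343 m/s, about 0.3 inches per sample at 44.1 kHz) is an assumption:

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Solve for the (x, y, z) of a speaker from its measured distances
    r1..r3 (inches) to three known reference points p1..p3."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    norm = lambda a: math.sqrt(dot(a, a))

    # Build an orthonormal frame: p1 at the origin, p2 on the local
    # x-axis, p3 in the local xy-plane.
    v21, v31 = sub(p2, p1), sub(p3, p1)
    d = norm(v21)
    ex = tuple(c / d for c in v21)
    i = dot(ex, v31)
    ey_raw = tuple(a - i * b for a, b in zip(v31, ex))
    ey = tuple(c / norm(ey_raw) for c in ey_raw)
    ez = (ex[1] * ey[2] - ex[2] * ey[1],
          ex[2] * ey[0] - ex[0] * ey[2],
          ex[0] * ey[1] - ex[1] * ey[0])
    j = dot(ey, v31)

    # Closed-form intersection of the three spheres in the local frame.
    x = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    y = (r1 ** 2 - r3 ** 2 + i ** 2 + j ** 2 - 2 * i * x) / (2 * j)
    z = math.sqrt(max(r1 ** 2 - x ** 2 - y ** 2, 0.0))  # take the upper solution
    return tuple(p1[k] + x * ex[k] + y * ey[k] + z * ez[k] for k in range(3))

SPEED_OF_SOUND_IN_PER_S = 13504.0  # assumes ~343 m/s, expressed in inches/second

def delay_samples(r_speaker, r_reference, fs=44100):
    """Samples of delay that time-align a closer speaker (r_speaker inches
    from the sweet spot) with speakers at the reference distance."""
    return round((r_reference - r_speaker) / SPEED_OF_SOUND_IN_PER_S * fs)
```

For example, a vertical speaker at rU1 = 88.8147 inches, aligned against the 120.205-inch horizontal ring, works out to on the order of 100 samples of delay.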
<br />
== Photo of the Listening Room: without speakers ==<br />
<br />
[[Image:Listeningroom.jpg]]<br />
<br />
<br />
[[Category:CCRMA User Guide]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Listening_Room_Specs&diff=6683Listening Room Specs2008-12-12T01:31:58Z<p>Jsadural: /* Welcome to the Pit Page */</p>
<hr />
<div>== Welcome to the Pit Page ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
This page discusses the room dimensions as well as the process of calibrating the spherical VBAP configuration. It also covers the equipment and software used to operate sound in the room, and how to configure and save settings for your personal needs. If you move any speaker in the room you will be personally responsible for recalculating and recalibrating the speaker configuration (in other words, "DON'T DO IT!!!"). In the future we hope to update this page with a new wave field synthesis installation. In 2008 we will be implementing a new system formally dubbed Openmixer. <br />
<br />
<br />
# Room Dimensions<br />
#* 277 inches x 292 inches x 86 inches (height)<br />
# There exists a heptagon in the center with 3 steps downwards to a cage in the middle<br />
#* Height of each step = 5.5 inches<br />
#* Length of step = 11.75 inches<br />
#* Length of side of Heptagon at top = 84 inches<br />
#* Depth of indentation in ceiling = 15 inches<br />
#* Length of side of Heptagon at cage(bottom) = 63 inches<br />
#* Depth to bottom of cage = 41 inches<br />
#* Closest point of Heptagon to wall = 51 inches<br />
# Equipment and software<br />
#* Samsung 17 inch monitor<br />
#* Linux box<br />
#* RME Hammerfall DSP Digiface<br />
#* Tascam TDIF-1/ADAT interface format converter<br />
#* Tascam DM-3200<br />
#* 5.1 DVD player<br />
#* 8 Mackie HR824<br />
#* 8 Mackie HR624<br />
#* AKG C-414 Mic/Stand/adapter<br />
#* ALSA HDSP<br />
#* Jack <br />
# Before beginning, create your own project by pressing "alt-Project" and cursoring down to "store as." Name your project using the dial and the enter key. After doing so, follow the instructions and store at the end of calibration. Please use the current calculations of the speaker configuration instead of repositioning the speakers. <br />
# Current speaker configuration<br />
#* Speaker 1 is on channel 1, assigned to output 7; the speaker configuration runs counter-clockwise starting at 0 degrees<br />
#* Speakers are in an octagonal setup, with adjacent speakers 92 inches apart<br />
#* Distance to the sweet spot from each horizontal speaker is 120.205 inches<br />
#* Vertical speaker positions are listed in counterclockwise order as ThetaU1 (upper 1 azimuth, in degrees), PhiU1 (upper 1 elevation, in degrees), rU1 (upper 1 distance, in inches), and so on. <br />
#* Speaker 1 horizontal = 120.205 <br />
#* ThetaU1 = 46.123 ; PhiU1 = 34.582; rU1 = 88.8147<br />
#* ThetaU2 = 134.578; PhiU2 = 35.213; rU2 = 87.2017<br />
#* ThetaU3 = 225.119; PhiU3 = 29.898; rU3 = 90.2783<br />
#* ThetaU4 = 328.552; PhiU4 = 31.448; rU4 = 91.3406<br />
#* ThetaD1 = 46.429 ; PhiD1 = -61.214; rD1 = 95.1767<br />
#* ThetaD2 = 135.557; PhiD2 = -60.980; rD2 = 95.6595<br />
#* ThetaD3 = 235.089; PhiD3 = -43.200; rD3 = 89.9286 (Possible Error)<br />
#* ThetaD4 = 317.181; PhiD4 = -55.781; rD4 = 98.3265<br />
# Speaker Positioning: To calibrate the speaker positions, I used a T-square, laser line tool, wire, tape measure, and protractor. I made sure that the sweet spot would fall intuitively at a seat placed in the center of the Pit.<br />
#* Use obvious methods to align the horizontal speakers in an octagon at an arbitrary distance measured from the center of the cage. <br />
#* Measure 3 distances from each vertical (above/below) speaker to 3 different horizontal speakers <br />
#* Use [http://en.wikipedia.org/wiki/Trilateration trilateration] to solve for the xyz position in space<br />
#* Use octagon properties to perform a coordinate transformation to find the position relative to the sweet spot (center of octagon)<br />
#* A PD patch that calculates the intersecting spheres is located on ccrma-gate at /jsadura/trilateration <br />
# Speaker Calibration<br />
#* Settings: Acoustic Space "B" half; Low Freq 37 Hz; High Freq 0 Hz; Power Mode ON<br />
#* Make sure all bus and gain settings are equal on the mixer<br />
#* One speaker at a time, adjust the gain and use pink noise to calibrate equal SPL levels at the sweet spot<br />
# Time Delay<br />
#* Calculate the distance (R) to each vertical speaker from the trilateration page and find the difference from the distance to the horizontal speakers<br />
#* At a 44.1 kHz sample rate, divide the distance difference in inches by roughly 0.3 (the distance sound travels per sample) to calculate the delay in samples<br />
#* On the Tascam DM-3200 mixer, select "Module" on the top right and move the cursor down to Delay <br />
#* Hold [http://en.wiktionary.org/wiki/shift#Verb Shift] while turning the third knob for smaller increments of delay in samples.<br />
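As a sketch of the trilateration and sample-delay steps above, here is an illustrative Python helper (not the PD patch mentioned earlier; the ~343 m/s speed of sound is an assumption):

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Solve for the (x, y, z) of a speaker from its measured distances
    r1..r3 (inches) to three known reference points p1..p3."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    norm = lambda a: math.sqrt(dot(a, a))

    # Orthonormal frame: p1 at the origin, p2 on the local x-axis,
    # p3 in the local xy-plane.
    v21, v31 = sub(p2, p1), sub(p3, p1)
    d = norm(v21)
    ex = tuple(c / d for c in v21)
    i = dot(ex, v31)
    ey_raw = tuple(a - i * b for a, b in zip(v31, ex))
    ey = tuple(c / norm(ey_raw) for c in ey_raw)
    ez = (ex[1] * ey[2] - ex[2] * ey[1],
          ex[2] * ey[0] - ex[0] * ey[2],
          ex[0] * ey[1] - ex[1] * ey[0])
    j = dot(ey, v31)

    # Closed-form intersection of the three spheres in the local frame.
    x = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    y = (r1 ** 2 - r3 ** 2 + i ** 2 + j ** 2 - 2 * i * x) / (2 * j)
    z = math.sqrt(max(r1 ** 2 - x ** 2 - y ** 2, 0.0))  # take the upper solution
    return tuple(p1[k] + x * ex[k] + y * ey[k] + z * ez[k] for k in range(3))

SPEED_OF_SOUND_IN_PER_S = 13504.0  # assumes ~343 m/s, expressed in inches/second

def delay_samples(r_speaker, r_reference, fs=44100):
    """Samples of delay that time-align a closer speaker (r_speaker inches
    from the sweet spot) with speakers at the reference distance."""
    return round((r_reference - r_speaker) / SPEED_OF_SOUND_IN_PER_S * fs)
```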
<br />
== Photo of the Listening Room: without speakers ==<br />
<br />
[[Image:Listeningroom.jpg]]<br />
<br />
<br />
[[Category:CCRMA User Guide]]</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Ambisonics_and_Impulse_Response&diff=1676Ambisonics and Impulse Response2007-01-21T04:05:29Z<p>Jsadural: /* Project Summary */</p>
<hr />
<div>== Project Summary ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
The goal is to define a methodology for creating speaker arrays for ambisonic playback.</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Ambisonic_Theater&diff=1675Ambisonic Theater2007-01-21T04:04:04Z<p>Jsadural: /* Project Summary */</p>
<hr />
<div>== Project Summary ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
In creating Ambisonic compositions, the audio exists as a data set represented in a 3D space. The advantage of this architecture is that a piece can be transported to arbitrary locations and rendered relatively quickly for various speaker configurations while maintaining its intended spatial properties. Ambisonic soundfield microphones make it possible to record and recreate a "soundfield" in xyzw (w being the pressure variable). The wavefronts produced by multi-channel Ambisonic playback retain recorded aural cues, including room size and reflective properties. In this experiment it is necessary to define a platform in which a 3D graphical representation and Ambisonic composition tools are rendered simultaneously, and in which the user can choose to spatialize visually or algorithmically. The goal is to convincingly create virtual sources interacting with actual recorded soundfields, and simultaneously to have the virtual image interact with the actual HD/IMAX recorded image.</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Open_Source_for_HighSchool_Multimedia_and_Journalism&diff=1674Open Source for HighSchool Multimedia and Journalism2007-01-21T04:01:37Z<p>Jsadural: /* Project Summary */</p>
<hr />
<div>== Project Summary ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
Our purpose is to introduce concepts of open source tools to Tennyson High School Multimedia & Journalism Department students for use in their current projects. Current projects include radio screenplays, weekly news reports, educational videos, music composition, and virtual yearbooks. The Multimedia & Journalism department at Tennyson High School has existed for 3 years and is currently the most academically and creatively prosperous department at Tennyson. Many of the topics discussed will be related to the design and execution of astrophysics educational videos I have created. Topics will include DSP effects, localization, video editing software, graphical rendering, and design.<br />
<br />
== Current equipment ==<br />
<br />
#* G4 Desktop<br />
#* Digital Camcorder<br />
#* 3 Shure microphones<br />
#* unfinished</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Astro-Sonification&diff=1673Astro-Sonification2007-01-21T03:48:38Z<p>Jsadural: /* Data Transposition */</p>
<hr />
<div><br />
== Intro ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
Our purpose is to explore the realm of computational physics and physical tendencies through acoustical perception. In our innate desire to explore who we are and where we come from, physicists have probed the cosmos as far as light and the age of the universe will allow for answers. From data gathered at the furthest reaches of the universe, such as the [http://nobelprize.org/nobel_prizes/physics/laureates/2006/ Cosmic Microwave Background], to relatively closer phenomena, a standard model is devised in order for us to understand what we observe and why. Through [http://en.wikipedia.org/wiki/Astrophysics#Observational_astrophysics observational astrophysics], data from telescopes and satellites are systematically collected to confirm and converge on the coefficients in our physical models, as well as the tendencies of certain systems. Computational astrophysicists then recreate and simulate these systems in order to test the stability and consistency of these models. Within this process, we attempt to create meaningful sonification techniques for these simulations in 3-dimensional spatialized sound, in order to better understand physical tendencies not easily seen with current visualization techniques. <br />
<br />
[[Image:TimeCone.jpg]]<br />
<br />
== Computational Astrophysics ==<br />
=== What is Computational Astrophysics? ===<br />
<br />
Computational astrophysics is the simulation of astrophysical phenomena on a computer by numerical integration of the relevant governing equations. Such simulations produce detailed solutions to highly complex problems in stellar evolution, galactic dynamics, numerical cosmology, and many other fields.<br />
<br />
=== Which simulations? ===<br />
<br />
The simulation data we are going to use comes from the [http://cosmos.ucsd.edu/ Laboratory for Computational Astrophysics] at UCSD, directed by [http://cosmos.ucsd.edu/~mnorman/ Michael Norman]. Professor Norman was one of the first innovators to apply computational physics to simulate [http://www.sciencedaily.com/releases/2001/11/011116065005.htm star formation] by evolving the initial conditions of our universe over time. Our goal is to develop meaningful sonification techniques and sonify phenomena through data mapping of simulation experiments done by his graduate student [http://lca2.ucsd.edu/~dcollins/Research/ Dave Collins]. Later we hope to be able to adapt this technique to other simulations fundamental to our understanding of physics. <br />
<br />
Links to simulations we hope to Sonify:<br />
<br />
* http://lca2.ucsd.edu/~dcollins/MagDensityJPGs.tar (Dave Collins current data)<br />
* http://cosmos.ucsd.edu/~mnorman/movies/siggraph01-amr.mpeg (Mike Norman's Simulation)<br />
* http://cosmos.ucsd.edu/~mnorman/movies/normanDFest864.mpeg (Mike Norman's Simulation)<br />
<br />
Software currently used to create simulation data:<br />
<br />
*ZEUS-2D: A one- or two-dimensional explicit Eulerian grid code for astrophysical radiation [http://en.wikipedia.org/wiki/Magnetohydrodynamics magnetohydrodynamics].<br />
<br />
*ZEUS-3D: A one-, two- or three-dimensional explicit Eulerian grid code for astrophysical magnetohydrodynamics.<br />
<br />
*ZEUS-MP: A parallel, three-dimensional explicit Eulerian grid code for astrophysical magnetohydrodynamics.<br />
<br />
*TITAN: A one-dimensional implicit adaptive mesh code for radiation hydrodynamics.<br />
<br />
*[KRONOS]: A three-dimensional grid-based hydrodynamics cosmology code combining the piecewise parabolic method (PPM) with the particle-mesh (PM) algorithm for collisionless particles.<br />
<br />
*Enzo: an adaptive mesh refinement (AMR), grid-based hybrid code (hydro + N-Body) which is designed to do simulations of cosmological structure formation.<br />
<br />
*[MGMPI]: A parallel, multigrid linear system solver for second order PDEs.<br />
<br />
*4D2: An interactive tool for visualizing and animating 3D data (array, particle) on Silicon Graphics workstations.<br />
<br />
*[LCA Vision]: The portable rewrite of 4D2 including support for adaptive mesh refinement data. <br />
<br />
<br />
Another notable project that has many simulations we hope to sonify is the [http://www.astro.princeton.edu/~jstone/athena.html Athena Project]. This project has many [http://www.astro.princeton.edu/~jstone/tests/index.html simulations] that are fundamental to understanding physics in our universe.<br />
<br />
== Computer Music ==<br />
<br />
=== Sonification Technique ===<br />
<br />
The software we will be using for sonification playback is [http://www-crca.ucsd.edu/~msp/software.html Pure Data] (PD), created by Miller Puckette, currently Associate Director of the [http://crca.ucsd.edu/ Center for Research in Computing and the Arts]. The core of the research will be conducted and developed in the [http://cm-wiki.stanford.edu/wiki/Listening_Room_Specs#Welcom_the_the_Pit_Page listening room] at [http://ccrma.stanford.edu/ CCRMA]. The current approach is to create a virtual environment in which the listener sits at a position, or moves along a path, in the simulation and can explore the space sonically in real time. Many parallel projects will be developed with the intention of sonifying astrophysical data.<br />
<br />
Links to projects:<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Spatial_layers#Project_Summary Spatial layers]<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Gloves_of_Shaolin Gloves of Shaolin]<br />
<br />
=== Data Transposition ===<br />
<br />
As a first approach to linking simulation data to current computer music tools, we will be using a 2D slice of the 3D simulation. The 2D slice, in JPEG format, will be analyzed using [http://ccrma.stanford.edu/~woony/works/raster/ Raster Scanning] developed by [http://ccrma.stanford.edu/~woony/ Woon Seung Yeo]. Though helpful, this process will prove time-consuming and CPU-intensive once implemented. A script is currently being developed by Dave Collins and myself in order to extract 3D data directly and efficiently from the current simulation data interpreter. This data will be used to simulate a constantly changing inhomogeneous [http://scienceworld.wolfram.com/physics/Ether.html ether] in virtual space, representative of the various types of physical densities one would encounter in space. The user can then perturb this virtual space using tools developed in parallel.</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Astro-Sonification&diff=1672Astro-Sonification2007-01-21T03:47:09Z<p>Jsadural: /* Data Transposition */</p>
<hr />
<div><br />
== Intro ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
Our purpose is to explore the realm of computational physics and physical tendencies through acoustical perception. In our innate desire to explore who we are and where we come from, physicists have probed the cosmos as far as light and the age of the universe will allow for answers. From data gathered at the furthest reaches of the universe, such as the [http://nobelprize.org/nobel_prizes/physics/laureates/2006/ Cosmic Microwave Background], to relatively closer phenomena, a standard model is devised in order for us to understand what we observe and why. Through [http://en.wikipedia.org/wiki/Astrophysics#Observational_astrophysics observational astrophysics], data from telescopes and satellites are systematically collected to confirm and converge on the coefficients in our physical models, as well as the tendencies of certain systems. Computational astrophysicists then recreate and simulate these systems in order to test the stability and consistency of these models. Within this process, we attempt to create meaningful sonification techniques for these simulations in 3-dimensional spatialized sound, in order to better understand physical tendencies not easily seen with current visualization techniques. <br />
<br />
[[Image:TimeCone.jpg]]<br />
<br />
== Computational Astrophysics ==<br />
=== What is Computational Astrophysics? ===<br />
<br />
Computational astrophysics is the simulation of astrophysical phenomena on a computer by numerical integration of the relevant governing equations. Such simulations produce detailed solutions to highly complex problems in stellar evolution, galactic dynamics, numerical cosmology, and many other fields.<br />
<br />
=== Which simulations? ===<br />
<br />
The simulation data we are going to use comes from the [http://cosmos.ucsd.edu/ Laboratory for Computational Astrophysics] at UCSD, directed by [http://cosmos.ucsd.edu/~mnorman/ Michael Norman]. Professor Norman was one of the first innovators to apply computational physics to simulate [http://www.sciencedaily.com/releases/2001/11/011116065005.htm star formation] by evolving the initial conditions of our universe over time. Our goal is to develop meaningful sonification techniques and sonify phenomena through data mapping of simulation experiments done by his graduate student [http://lca2.ucsd.edu/~dcollins/Research/ Dave Collins]. Later we hope to be able to adapt this technique to other simulations fundamental to our understanding of physics. <br />
<br />
Links to simulations we hope to Sonify:<br />
<br />
* http://lca2.ucsd.edu/~dcollins/MagDensityJPGs.tar (Dave Collins current data)<br />
* http://cosmos.ucsd.edu/~mnorman/movies/siggraph01-amr.mpeg (Mike Norman's Simulation)<br />
* http://cosmos.ucsd.edu/~mnorman/movies/normanDFest864.mpeg (Mike Norman's Simulation)<br />
<br />
Software currently used to create simulation data:<br />
<br />
*ZEUS-2D: A one- or two-dimensional explicit Eulerian grid code for astrophysical radiation [http://en.wikipedia.org/wiki/Magnetohydrodynamics magnetohydrodynamics].<br />
<br />
*ZEUS-3D: A one-, two- or three-dimensional explicit Eulerian grid code for astrophysical magnetohydrodynamics.<br />
<br />
*ZEUS-MP: A parallel, three-dimensional explicit Eulerian grid code for astrophysical magnetohydrodynamics.<br />
<br />
*TITAN: A one-dimensional implicit adaptive mesh code for radiation hydrodynamics.<br />
<br />
*[KRONOS]: A three-dimensional grid-based hydrodynamics cosmology code combining the piecewise parabolic method (PPM) with the particle-mesh (PM) algorithm for collisionless particles.<br />
<br />
*Enzo: an adaptive mesh refinement (AMR), grid-based hybrid code (hydro + N-Body) which is designed to do simulations of cosmological structure formation.<br />
<br />
*[MGMPI]: A parallel, multigrid linear system solver for second order PDEs.<br />
<br />
*4D2: An interactive tool for visualizing and animating 3D data (array, particle) on Silicon Graphics workstations.<br />
<br />
*[LCA Vision]: The portable rewrite of 4D2 including support for adaptive mesh refinement data. <br />
<br />
<br />
Another notable project that has many simulations we hope to sonify is the [http://www.astro.princeton.edu/~jstone/athena.html Athena Project]. This project has many [http://www.astro.princeton.edu/~jstone/tests/index.html simulations] that are fundamental to understanding physics in our universe.<br />
<br />
== Computer Music ==<br />
<br />
=== Sonification Technique ===<br />
<br />
The software we will be using for sonification playback is [http://www-crca.ucsd.edu/~msp/software.html Pure Data] (PD), created by Miller Puckette, currently Associate Director of the [http://crca.ucsd.edu/ Center for Research in Computing and the Arts]. The core of the research will be conducted and developed in the [http://cm-wiki.stanford.edu/wiki/Listening_Room_Specs#Welcom_the_the_Pit_Page listening room] at [http://ccrma.stanford.edu/ CCRMA]. The current approach is to create a virtual environment in which the listener sits at a position, or moves along a path, in the simulation and can explore the space sonically in real time. Many parallel projects will be developed with the intention of sonifying astrophysical data.<br />
<br />
Links to projects:<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Spatial_layers#Project_Summary Spatial layers]<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Gloves_of_Shaolin Gloves of Shaolin]<br />
<br />
=== Data Transposition ===<br />
<br />
As a first approach to linking simulation data to current computer music tools, we will be using a 2D slice of the 3D simulation. The 2D slice, in JPEG format, will be analyzed using [http://ccrma.stanford.edu/~woony/works/raster/ Raster Scanning] developed by [http://ccrma.stanford.edu/~woony/ Woon Seung Yeo]. Though helpful, this process will prove to be time-consuming once implemented. A script is currently being developed by Dave Collins and myself in order to extract 3D data directly and efficiently from the current simulation data interpreter. This data will be used to simulate a constantly changing inhomogeneous [http://scienceworld.wolfram.com/physics/Ether.html ether] in virtual space, representative of the various types of physical densities one would encounter in space. The user can then perturb this virtual space using tools developed in parallel.</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Astro-Sonification&diff=1671Astro-Sonification2007-01-21T03:46:46Z<p>Jsadural: /* Data Transposition */</p>
<hr />
<div><br />
== Intro ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
Our purpose is to explore the realm of computational physics and physical tendencies through acoustical perception. In our innate desire to explore who we are and where we come from, physicists have probed the cosmos as far as light and the age of the universe will allow for answers. From data gathered at the furthest reaches of the universe, such as the [http://nobelprize.org/nobel_prizes/physics/laureates/2006/ Cosmic Microwave Background], to relatively closer phenomena, a standard model is devised in order for us to understand what we observe and why. Through [http://en.wikipedia.org/wiki/Astrophysics#Observational_astrophysics observational astrophysics], data from telescopes and satellites are systematically collected to confirm and converge on the coefficients in our physical models, as well as the tendencies of certain systems. Computational astrophysicists then recreate and simulate these systems in order to test the stability and consistency of these models. Within this process, we attempt to create meaningful sonification techniques for these simulations in 3-dimensional spatialized sound, in order to better understand physical tendencies not easily seen with current visualization techniques. <br />
<br />
[[Image:TimeCone.jpg]]<br />
<br />
== Computational Astrophysics ==<br />
=== What is Computational Astrophysics? ===<br />
<br />
Computational astrophysics is the simulation of astrophysical phenomena on a computer by numerical integration of the relevant governing equations. Such simulations produce detailed solutions to highly complex problems in stellar evolution, galactic dynamics, numerical cosmology, and many other fields.<br />
<br />
=== Which simulations? ===<br />
<br />
The simulation data we are going to use comes from the [http://cosmos.ucsd.edu/ Laboratory for Computational Astrophysics] at UCSD, directed by [http://cosmos.ucsd.edu/~mnorman/ Michael Norman]. Professor Norman was one of the first innovators to apply computational physics to simulate [http://www.sciencedaily.com/releases/2001/11/011116065005.htm star formation] by evolving the initial conditions of our universe over time. Our goal is to develop meaningful sonification techniques and sonify phenomena through data mapping of simulation experiments done by his graduate student [http://lca2.ucsd.edu/~dcollins/Research/ Dave Collins]. Later we hope to be able to adapt this technique to other simulations fundamental to our understanding of physics. <br />
<br />
Links to simulations we hope to Sonify:<br />
<br />
* http://lca2.ucsd.edu/~dcollins/MagDensityJPGs.tar (Dave Collins current data)<br />
* http://cosmos.ucsd.edu/~mnorman/movies/siggraph01-amr.mpeg (Mike Norman's Simulation)<br />
* http://cosmos.ucsd.edu/~mnorman/movies/normanDFest864.mpeg (Mike Norman's Simulation)<br />
<br />
Software currently used to create simulation data:<br />
<br />
*ZEUS-2D: A one- or two-dimensional explicit Eulerian grid code for astrophysical radiation [http://en.wikipedia.org/wiki/Magnetohydrodynamics magnetohydrodynamics].<br />
<br />
*ZEUS-3D: A one-, two- or three-dimensional explicit Eulerian grid code for astrophysical magnetohydrodynamics.<br />
<br />
*ZEUS-MP: A parallel, three-dimensional explicit Eulerian grid code for astrophysical magnetohydrodynamics.<br />
<br />
*TITAN: A one-dimensional implicit adaptive mesh code for radiation hydrodynamics.<br />
<br />
*[KRONOS]: A three-dimensional grid-based hydrodynamics cosmology code combining the piecewise parabolic method (PPM) with the particle-mesh (PM) algorithm for collisionless particles.<br />
<br />
*Enzo: an adaptive mesh refinement (AMR), grid-based hybrid code (hydro + N-Body) which is designed to do simulations of cosmological structure formation.<br />
<br />
*[MGMPI]: A parallel, multigrid linear system solver for second order PDEs.<br />
<br />
*4D2: An interactive tool for visualizing and animating 3D data (array, particle) on Silicon Graphics workstations.<br />
<br />
*[LCA Vision]: The portable rewrite of 4D2 including support for adaptive mesh refinement data. <br />
<br />
<br />
Another notable project that has many simulations we hope to sonify is the [http://www.astro.princeton.edu/~jstone/athena.html Athena Project]. This project has many [http://www.astro.princeton.edu/~jstone/tests/index.html simulations] that are fundamental to understanding physics in our universe.<br />
<br />
== Computer Music ==<br />
<br />
=== Sonification Technique ===<br />
<br />
The software we will be using for sonification playback is [http://www-crca.ucsd.edu/~msp/software.html Pure Data] (PD), created by Miller Puckette, currently Associate Director of the [http://crca.ucsd.edu/ Center for Research in Computing and the Arts]. The core of the research will be conducted and developed in the [http://cm-wiki.stanford.edu/wiki/Listening_Room_Specs#Welcom_the_the_Pit_Page listening room] at [http://ccrma.stanford.edu/ CCRMA]. The current approach is to create a virtual environment in which the listener sits at a position, or moves along a path, in the simulation and can explore the space sonically in real time. Many parallel projects will be developed with the intention of sonifying astrophysical data.<br />
<br />
Links to projects:<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Spatial_layers#Project_Summary Spatial layers]<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Gloves_of_Shaolin Gloves of Shaolin]<br />
<br />
=== Data Transposition ===<br />
<br />
As a first approach to linking simulation data to current computer music tools, we will be using a 2D slice of the 3D simulation. The 2D slice, in JPEG format, will be analyzed using [http://ccrma.stanford.edu/~woony/works/raster/ Raster Scanning] developed by [http://ccrma.stanford.edu/~woony/ Woon Seung Yeo]. Though helpful, this process will prove to be time-consuming once implemented. A script is currently being developed by Dave Collins and myself in order to extract 3D data directly and efficiently from the current simulation data interpreter. This data will be used to simulate a constantly changing inhomogeneous [http://scienceworld.wolfram.com/physics/Ether.html ether] in virtual space, representative of the various types of physical densities one would encounter in space. The user can then perturb this virtual space using tools developed in parallel.</div>Jsaduralhttps://ccrma.stanford.edu/mediawiki/index.php?title=Astro-Sonification&diff=1670Astro-Sonification2007-01-21T03:30:46Z<p>Jsadural: /* Data Transposition */</p>
<hr />
<div><br />
== Intro ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
Our purpose is to explore the realm of computational physics and physical tendencies through acoustical perception. In our innate desire to explore who we are and where we come from, physicists have probed the cosmos as far as light and the age of the universe will allow for answers. From data gathered at the furthest reaches of the universe, such as the [http://nobelprize.org/nobel_prizes/physics/laureates/2006/ Cosmic Microwave Background], to relatively closer phenomena, a standard model is devised in order for us to understand what we observe and why. Through [http://en.wikipedia.org/wiki/Astrophysics#Observational_astrophysics observational astrophysics], data from telescopes and satellites are systematically collected to confirm and converge on the coefficients in our physical models, as well as the tendencies of certain systems. Computational astrophysicists then recreate and simulate these systems in order to test the stability and consistency of these models. Within this process, we attempt to create meaningful sonification techniques for these simulations in 3-dimensional spatialized sound, in order to better understand physical tendencies not easily seen with current visualization techniques. <br />
<br />
[[Image:TimeCone.jpg]]<br />
<br />
== Computational Astrophysics ==<br />
=== What is Computational Astrophysics? ===<br />
<br />
Computational astrophysics is the simulation of astrophysical phenomena on a computer by numerical integration of the relevant governing equations. Such simulations produce detailed solutions to highly complex problems in stellar evolution, galactic dynamics, numerical cosmology, and many other fields.<br />
<br />
=== Which simulations? ===<br />
<br />
The simulation data we will use comes from the [http://cosmos.ucsd.edu/ Laboratory for Computational Astrophysics] at UCSD, directed by [http://cosmos.ucsd.edu/~mnorman/ Michael Norman]. Professor Norman was one of the first to apply computational physics to simulate [http://www.sciencedaily.com/releases/2001/11/011116065005.htm star formation] by evolving the initial conditions of our universe over time. Our goal is to develop meaningful sonification techniques and to sonify phenomena through data mapping of simulation experiments done by his graduate student [http://lca2.ucsd.edu/~dcollins/Research/ Dave Collins]. Later we hope to adapt this technique to other simulations fundamental to our understanding of physics. <br />
<br />
Links to simulations we hope to sonify:<br />
<br />
* http://lca2.ucsd.edu/~dcollins/MagDensityJPGs.tar (Dave Collins' current data)<br />
* http://cosmos.ucsd.edu/~mnorman/movies/siggraph01-amr.mpeg (Mike Norman's Simulation)<br />
* http://cosmos.ucsd.edu/~mnorman/movies/normanDFest864.mpeg (Mike Norman's Simulation)<br />
<br />
Software currently used to create simulation data:<br />
<br />
*ZEUS-2D: A one- or two-dimensional explicit Eulerian grid code for astrophysical radiation [http://en.wikipedia.org/wiki/Magnetohydrodynamics magnetohydrodynamics].<br />
<br />
*ZEUS-3D: A one-, two- or three-dimensional explicit Eulerian grid code for astrophysical magnetohydrodynamics.<br />
<br />
*ZEUS-MP: A parallel, three-dimensional explicit Eulerian grid code for astrophysical magnetohydrodynamics.<br />
<br />
*TITAN: A one-dimensional implicit adaptive mesh code for radiation hydrodynamics.<br />
<br />
*[KRONOS]: A three-dimensional grid-based hydrodynamics cosmology code combining the piecewise parabolic method (PPM) with the particle-mesh (PM) algorithm for collisionless particles.<br />
<br />
*Enzo: an adaptive mesh refinement (AMR), grid-based hybrid code (hydro + N-Body) which is designed to do simulations of cosmological structure formation.<br />
<br />
*[MGMPI]: A parallel, multigrid linear system solver for second order PDEs.<br />
<br />
*4D2: An interactive tool for visualizing and animating 3D data (array, particle) on Silicon Graphics workstations.<br />
<br />
*[LCA Vision]: The portable rewrite of 4D2 including support for adaptive mesh refinement data. <br />
<br />
<br />
Another notable project with many simulations we hope to sonify is the [http://www.astro.princeton.edu/~jstone/athena.html Athena Project], which provides many [http://www.astro.princeton.edu/~jstone/tests/index.html simulations] fundamental to understanding the physics of our universe.<br />
<br />
== Computer Music ==<br />
<br />
=== Sonification Technique ===<br />
<br />
The software we will use for sonification playback is [http://www-crca.ucsd.edu/~msp/software.html Pure Data] (PD), created by Miller Puckette, currently Associate Director of the [http://crca.ucsd.edu/ Center for Research in Computing and the Arts]. The core of the research will be conducted and developed in the [http://cm-wiki.stanford.edu/wiki/Listening_Room_Specs#Welcom_the_the_Pit_Page listening room] at [http://ccrma.stanford.edu/ CCRMA]. The current approach is to create a virtual environment in which the listener occupies a position, or moves along a path, within the simulation and can explore the space sonically in real time. Several projects will be developed in parallel with the intention of sonifying astrophysical data.<br />
<br />
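To sketch the virtual-listener idea above, here is a hypothetical Python fragment (a toy model of our own, not PD code) that maps a listener's distance and bearing to a data-driven source into stereo gains, using inverse-distance attenuation and equal-power panning:

```python
import math

def listener_gains(listener, source, ref_dist=1.0):
    """Toy spatialization for a virtual listener hearing a source.

    `listener` and `source` are (x, y, z) positions in simulation
    units (an assumption). Returns (left, right) amplitude gains.
    """
    d = math.dist(listener, source)
    # Inverse-distance attenuation, clamped so amplitude stays finite
    # when the listener reaches the source.
    amp = ref_dist / max(d, ref_dist)
    # Azimuth of the source relative to the listener (x-y plane only).
    az = math.atan2(source[1] - listener[1], source[0] - listener[0])
    pan = (az / math.pi + 1.0) / 2.0   # 0 = hard left, 1 = hard right
    left = amp * math.cos(pan * math.pi / 2)
    right = amp * math.sin(pan * math.pi / 2)
    return left, right
```

In practice such gains could be computed per control block and sent to a PD patch (e.g. over OSC) as the listener moves along a path through the simulated space.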
Links to projects:<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Spatial_layers#Project_Summary Spatial layers]<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Gloves_of_Shaolin Gloves of Shaolin]<br />
<br />
=== Data Transposition ===<br />
<br />
The first approach to linking simulation data with current computer music tools will use a 2D slice of the 3-dimensional simulation. The 2D slice will be analyzed with it</div>Jsadural
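As a rough illustration of the raster-scanning idea (a hypothetical sketch of our own, not Woon Seung Yeo's actual tool), the following Python function flattens a 2D slice row by row into a one-dimensional signal rescaled to [-1, 1], suitable for audio playback:

```python
def raster_scan(slice2d):
    """Raster-scan a 2D slice (e.g. pixel values decoded from one of
    the density JPEGs) into a 1D signal, row by row, rescaled to
    [-1, 1]. Illustrative only."""
    flat = [float(v) for row in slice2d for v in row]  # row-major scan
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [0.0] * len(flat)          # flat field: silence
    return [2.0 * (v - lo) / (hi - lo) - 1.0 for v in flat]
```

Played back at audio rate, structure along the scan path (e.g. density gradients in the slice) becomes audible as periodicity and texture in the resulting signal.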