https://ccrma.stanford.edu/mediawiki/api.php?action=feedcontributions&user=Wikimaster&feedformat=atomCCRMA Wiki - User contributions [en]2024-03-29T00:06:10ZUser contributionsMediaWiki 1.24.1https://ccrma.stanford.edu/mediawiki/index.php?title=COVID-19&diff=22366COVID-192020-03-19T21:42:17Z<p>Wikimaster: Undo revision 22365 by Wikimaster (talk)</p>
<hr />
<div>Have at it, CCRMA community<br />
<br />
= Information =<br />
<br />
* [https://ccrma.stanford.edu/docs/common/COVID-19.html CCRMA COVID-19 Info Page]<br />
<br />
<br />
= Technical =<br />
<br />
* [https://ccrma.stanford.edu/docs/common/JackTrip.html jacktrip] — high-quality, low-latency audio over networks<br />
<br />
* [https://www.youtube.com/watch?v=50NoWIiYECA Zoom in "Music Mode" by RAMA Vocal Center]<br />
<br />
= TAing =<br />
* [https://stanford.box.com/s/vahar9jiyddiyfjfl7ee6mlzmr22myuv TA resources from Julie Herndon & Gabriel Ellis]<br />
<br />
<br />
= Virtual Classroom Tips =<br />
* [https://stanford.zoom.us Zoom] (of course)<br />
* [https://www.airserver.com AirServer] for screen-mirroring from tablet to desktop:<br />
** [https://www.airserver.com/Mac Mac]<br />
** [https://www.airserver.com/WindowsDesktop Windows]<br />
* [https://products.office.com/en-us/onenote/digital-note-taking-app?rtc=1 OneNote] for virtual whiteboard<br />
* [https://www.goodreader.com GoodReader] for displaying / annotating / drawing on PDFs on projected tablet<br />
* [https://piazza.com Piazza] for class announcements and more<br />
* [https://gocanvas.stanford.edu/gate/ Canvas] for managing assignments and exams (which can be online)<br />
* [https://medium.com/@ezra_69528/mac-how-to-output-midi-to-zoom-conference-with-logic-pro-x-28aaec683132 How to "share audio" from Logic Pro over Zoom]<br />
<br />
= Health and Well Being =<br />
<br />
* Nette is holding a daily Zoom "drop-in" hangout from 1pm-2pm M-F; please see Nette's email from 3/19/2020 (subject: "Town Hall Recording") for ways to connect<br />
<br />
* (Errand sharing sheet created by Michiko Theurer - see email)<br />
<br />
* [https://artful.design/tv Artful Design TV (COVID-19 Edition) — a weekly Zoom series]<br />
<br />
* [https://docs.google.com/spreadsheets/u/1/d/13nhPQ9uC9qwHnkCkezP3MI-xRkGTqsfbqDp3hwgFt9c/edit?fbclid=IwAR0M82q_k62RQ_FBYFIDheJ5cAUFiJlufJGW-kNO9aoutljiBJYT8p-u93c#gid=443587257 Stanford-wide community offerings]<br />
<br />
= Grocery Delivery Options =<br />
* [http://instacart.com InstaCart] (four-day delay as of March 16)<br />
* [https://www.safeway.com Safeway] (deliveries completely sold out, but you might be able to pick up your order if they do not cancel it as mine just was)</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Upcoming_music_technology_conferences&diff=2604Upcoming music technology conferences2007-10-02T17:30:50Z<p>Wikimaster: </p>
<hr />
<div>Upcoming music technology conferences and deadlines.<br />
<br />
<br />
Gary Scavone's list<br />
http://www.music.mcgill.ca/~gary/conferences.html<br />
<br />
<br />
<br />
== Paper submission open ==<br />
<br />
'''123rd AES Convention''' http://www.aes.org/events/123/<br />
<br />
Oct 5-8, 2007, New York, NY, USA<br />
<br />
Proposal due '''6/1/2007'''<br />
<br />
<br />
'''Asilomar Conference on Signals, Systems, and Computers''' http://www.asilomarssc.org/<br />
<br />
Nov 4-7, 2007, Pacific Grove, CA<br />
<br />
Abstract and 500-word summary due '''6/1/2007'''<br />
<br />
<br />
'''ICASSP''' http://www.icassp2008.org/<br />
<br />
Mar 30-Apr 4, 2008, Las Vegas, NV<br />
<br />
4-page camera-ready papers due '''10/5/2007'''<br />
<br />
Notification of Acceptance: December 14, 2007<br />
<br />
<br />
== Submission closed ==<br />
<br />
'''DAFx07''' http://dafx.labri.fr/<br />
<br />
Sept 10-15, 2007, Bordeaux, France<br />
<br />
Full length paper due '''3/31/2007'''<br />
<br />
Oral presentation: 8 pages; poster presentation: 4 pages.<br />
<br />
Final paper due 6/21/2007<br />
<br />
<br />
'''International Congress on Acoustics (ICA)''' http://www.ica2007madrid.org/<br />
<br />
ICA 2007, Madrid, Spain, 2-7 September 2007<br />
<br />
Abstracts due 1 April 2007<br />
<br />
<br />
'''International Symposium on Musical Acoustics (ISMA)''' http://www.ica2007madrid.org/modules.php?name=section&id=9&sec=6<br />
<br />
ISMA 2007, Barcelona, Spain, 9-12 September 2007<br />
<br />
Abstracts due 1 April 2007<br />
<br />
<br />
'''ICMC 2007''' http://www.icmc2007.net/<br />
<br />
Full paper due '''4/30/2007'''<br />
<br />
Camera ready due '''6/30/2007'''<br />
<br />
Conference dates Aug 27-31, 2007<br />
<br />
<br />
AES http://www.aes.org/events/<br />
<br />
'''32nd International AES Conference on DSP for Loudspeakers'''<br />
<br />
Sept 21-23, 2007, Hillerød, Denmark<br />
<br />
Full paper submission due '''4/10/2007'''<br />
<br />
<br />
'''WASPAA 2007''' http://www.kecl.ntt.co.jp/icl/signal/waspaa2007/<br />
<br />
2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics<br />
<br />
Mohonk Mountain House<br />
<br />
October 21-24, 2007, New Paltz, New York<br />
<br />
4 page paper due '''5/18/2007'''<br />
<br />
<br />
[[Category:General]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Stk2pd&diff=2603Stk2pd2007-10-02T17:29:40Z<p>Wikimaster: </p>
<hr />
<div>stk2pd is a script to convert STK objects to Pd objects.<br />
<br />
It is currently tested on:<br />
* PlanetCCRMA Linux machines running FC6<br />
* Intel Macs running OS 10.4.10 and Pd version 0.39.2-extended-rc3<br />
<br />
==Legal==<br />
The code and scripts in this package are provided for free with no warranty. Use at your own risk. The source package includes the rawwaves directory from the STK, which may or may not be covered by a license. The binary Pd objects generated by this package use libstk.a and require a path to the STK include directory, neither of which are included here. See the STK source for licensing and legal information about STK.<br />
<br />
<br />
==Downloads==<br />
Source<br />
* [http://ccrma.stanford.edu/courses/250a/pd/stk2pd-07272007.tar.gz stk2pd-07272007.tar.gz]<br />
<br />
<br />
Pd Externs<br />
* [http://ccrma.stanford.edu/courses/250a/pd/stk2pd-externs-IntelMac-07272007.tar.gz stk2pd-externs-IntelMac-07272007.tar.gz]<br />
* [http://ccrma.stanford.edu/courses/250a/pd/stk2pd-externs-Linux-07272007.tar.gz stk2pd-externs-Linux-07272007.tar.gz]<br />
<br />
==To Compile==<br />
<br />
1. You need the STK includes and libstk.a.<br />
You can download the STK and compile it from source at:<br />
[http://ccrma.stanford.edu/software/stk/ http://ccrma.stanford.edu/software/stk/]<br />
<br />
2. Autoconf. Type:<br />
<br />
autoconf<br />
<br />
This will generate a configure file.<br />
<br />
<br />
3. Configure. You probably need to set the Pd path and the STK path, and may want to set the installation prefix. You can do this by typing:<br />
<br />
./configure --with-pd-dir=/path/to/pd/include --with-stk-dir=/path/to/stk --prefix=/where/you/put/your/pd/externs<br />
<br />
4. If this is successful, you can type:<br />
<br />
make<br />
<br />
and then<br />
<br />
make install<br />
<br />
This will copy the compiled externs and the (required) STK rawwaves directory to the installation directory, determined by --prefix.<br />
<br />
5. Note that to install the help patches, you'll need to copy them manually to where you keep your Pd help patches.<br />
<br />
6. *WARNING* Some of the STK instruments load small sound files into buffers when they are created. These sound files are in the rawwaves directory, and the path to this directory must be set at COMPILE TIME. Because of this, I set the path to rawwaves to be relative to the Pd objects. This means that the rawwaves directory needs to be copied along with the compiled Pd objects to your destination directory.<br />
<br />
<br />
==How this (hopefully) works==<br />
<br />
* When you type make, the Makefile (generated by configure) calls a shell script called ProcessInstrument. <br />
<br />
* This script generates a .cpp file for each STK instrument in the list defined on the first line of the Makefile. The .cpp files are put in the cppfiles directory. <br />
<br />
* To change this list, you can edit the Makefile (or Makefile.in, and then re-run configure). <br />
<br />
* The .cpp files are generated by find and replace, based on the template cpp file called stk2pdTemplate. <br />
<br />
* After each is generated, a Pd extern is compiled into the externs directory.<br />
<br />
* The install: make target stupidly copies the compiled externs and the rawwaves directory to the path set by --prefix.<br />
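The per-instrument generation step can be sketched roughly in shell as follows. This is only an illustration of the find-and-replace idea: the placeholder token, file names, and instrument list here are made up, not the actual ones used by ProcessInstrument and stk2pdTemplate.<br />

```shell
# Toy stand-in for stk2pdTemplate: a source file with a placeholder
# token where the STK class name belongs (token name is illustrative).
cat > template.cpp <<'EOF'
#include "INSTRUMENT.h"
static t_class *INSTRUMENT_tilde_class;
EOF

# For each instrument in the list, substitute the class name into a
# fresh copy of the template, mimicking what ProcessInstrument does
# for the list defined in the Makefile.
mkdir -p cppfiles
for inst in Clarinet Flute Mandolin; do
    sed "s/INSTRUMENT/${inst}/g" template.cpp > "cppfiles/${inst}.cpp"
done

head -n 1 cppfiles/Clarinet.cpp   # prints: #include "Clarinet.h"
```

Each generated .cpp file is then compiled into a Pd extern against libstk.a, as described above.<br />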
<br />
<br />
[[Category:CCRMA User Guide]][[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Spatial_layers&diff=2602Spatial layers2007-10-02T17:28:30Z<p>Wikimaster: </p>
<hr />
<div>== Project Summary ==<br />
Written by Jason Sadural (jsadural@ccrma.stanford.edu).<br />
Comments and suggestions always welcome.<br />
<br />
Our purpose is to create a methodology for understanding the maximum threshold of observable individual musical tracks that move rhythmically in space. This research will eventually lead to the creation of an auditory canvas that we can distort and perturb by stretching, bending, or even ripping. The products developed in this experiment are intended for the development of [http://cm-wiki.stanford.edu/wiki/Astro-Sonification#Sonificatiion_Technique Astro-Sonification]. Various psychoacoustically inspired experiments will be conducted, and data will be collected, through a graphical user interface developed in [http://www-crca.ucsd.edu/~msp/software.html PD] with [http://gem.iem.at/ GEM]. Experiments will primarily be conducted in the [http://cm-wiki.stanford.edu/wiki/Listening_Room_Specs#Welcom_the_the_Pit_Page Listening Room].<br />
<br />
<br />
<br />
==GUI==<br />
<br />
The developed user interface is representative of the actual configuration of the listening room. The gray cubes represent actual speaker location and configuration with respect to the listener in the absolute center of the purple sphere. The solid blue spheres indicate individual output channels which can be moved anywhere in virtual space. The user is able to zoom in/out of the sphere as well as rotate to any perspective. <br />
<br />
==Spatialization==<br />
<br />
The current working driver that enables the sound source to move freely along the sphere is VBAP, inspired by [http://www.acoustics.hut.fi/~ville/ Ville Pulkki]. Initial tests and subject responses have shown VBAP to produce high accuracy with point-source localization in accordance with the virtual space. Further implementations involving [http://drpichon.free.fr/pmpd/ physical modeling for PD], such as a spring, have been added to the interface. Tests in which 9 point sources attached along a stretchy string move along the sphere while the user perturbs it in real time have produced great results. <br />
<br />
''' ''In Progress'' '''<br />
* Ambisonics<br />
Simultaneous playback of sounds incorporating First Order [http://en.wikipedia.org/wiki/Ambisonics Ambisonics] in order to create localization paths independent of the purple sphere. Software used for this implementation will be [http://www.audiosynth.com/ SuperCollider], and the absolute position data will be communicated to PD using [http://en.wikipedia.org/wiki/OpenSound_Control Open Sound Control]. <br />
<br />
* Real-time controllers<br />
A wireless Bluetooth accelerometer-based sphere called [http://ccrma.stanford.edu/~woony/works/brbi/ BRBI], created by [http://ccrma.stanford.edu/~woony/ Woon Seung Yeo], will interact with the interface over OSC. Gestures such as rotation and shaking will be mapped accordingly to physical models implemented in the interface.<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Gloves_of_Shaolin#Project_Summary The gloves of shaolin] will be a tool with which the user will be able to perturb the auditory canvas and to send projectiles through virtual space that sonically represent the inhomogeneities encountered along their path.<br />
<br />
* Source path designer<br />
e.g., a figure-8 path along the sphere will be able to stretch vertically or horizontally in real time.<br />
<br />
* Spatial cues<br />
Algorithms incorporating Doppler shift, virtual wall reflections, and damping will be added in accordance with the psychoacoustic experiment design.<br />
<br />
== Psychoacoustic experimentation ==<br />
For canvas design it is necessary to determine several auditory thresholds. One is determining the physical limit to which a correlated and syncopated sound can be separated in virtual space with the user still able to identify the correlation. Further discussions are needed in order to determine the types of sounds to be used. For the moment we will assume the sound source to be confined to the sphere. Psychoacoustic experiment design is currently still at its most preliminary stage. More to come...<br />
<br />
<br />
[[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Spam_Control_at_CCRMA&diff=2601Spam Control at CCRMA2007-10-02T17:28:00Z<p>Wikimaster: </p>
<hr />
<div>'''This page is not ready, details below may be (and probably are) broken!!'''<br />
<br />
Welcome to CCRMA's Spam fighter homepage.<br />
<br />
Having a 'Spam Free' inbox requires vigilance on everyone's part. In the text below, we'll describe what you can do to minimize your Spam.<br />
<br />
First you need to determine which email client you will be using (e.g. Evolution, Thunderbird, WebMail, or Pine). Spam fighting is much more difficult if you use more than one email client. The descriptions below are for exclusive use of '''only''' one client. These solutions also assume that you will '''not''' be using client-side 'intelligent email filtering' (where your email client 'learns' about Junk mail).<br />
<br />
<br />
== .procmailrc File ==<br />
<br />
This file is key in your Spam control effort. It is a hidden or 'dot' file, located in your home directory. You will need to create this file; see below for details on how to do this. .procmailrc gives Sendmail [http://en.wikipedia.org/wiki/Sendmail] instructions on where to route your email once it arrives at CCRMA. The idea is to make Sendmail route email through SpamAssassin [http://spamassassin.apache.org] before it gets to your client inbox. <br />
<br />
SpamAssassin will run each email through its filter (filter rules are updated frequently to reflect new spam 'threats'). SpamAssassin adds several lines to your email headers, including X-Spam-Level, X-Spam-Checker-Version, X-Spam-Status, and X-Spam-Report. For now, let's concentrate on '''X-Spam-Level''', since it is on this line that you will create filters in your email client.<br />
<br />
<pre><br />
# directory where mailboxes are located<br />
# this is the default used by pine<br />
MAILDIR=$HOME<br />
<br />
# pipe the message through spamassassin in cm-home<br />
:0fw<br />
| spamc -d 171.64.197.138<br />
</pre><br />
<br />
At a minimum, this text in the .procmailrc file will direct your mail through SpamAssassin, which will then tag (add lines to) each email's headers reflecting its likelihood of being spam (the header line '''X-Spam-Level'''). X-Spam-Level displays the Spam Level using asterisks. For example, for 'Spam Level 15':<br />
<br />
<pre><br />
X-Spam-Level: ***************<br />
</pre><br />
<br />
The idea, then, is to establish filters in your email client that match on these asterisks, directing such messages into more manageable folders or deleting them automatically (wise for Spam Level 15).<br />
<br />
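Alternatively, the filtering can be done in procmail itself. For example, a recipe like the following (added to .procmailrc after the spamc section above) would file any message tagged with five or more asterisks into a separate mailbox; the mailbox name here is only an illustration:<br />
<br />
<pre><br />
# file anything at Spam Level 5 or higher into a separate<br />
# mailbox (the mailbox name is illustrative)<br />
:0:<br />
* ^X-Spam-Level: \*\*\*\*\*<br />
almost-certainly-spam<br />
</pre><br />
<br />
Because procmail recipes are tried in order, this recipe should come after the spamc recipe so the X-Spam-Level header already exists when it is tested.<br />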
[[Category:CCRMA User Guide]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Soundwire-fall2007/Equipment&diff=2600Soundwire-fall2007/Equipment2007-10-02T17:27:28Z<p>Wikimaster: </p>
<hr />
<div>=== What each person is bringing, exactly ===<br />
<br />
Please indicate (as '''precisely''' as possible) what gear you'll be bringing to class. Thanks!<br />
<br />
* Chris C:<br />
* Ge: laptop (need AC), stereo 1/8" -> RCA cable.<br />
* Kyle:<br />
* Nick:<br />
* Turner:<br />
* Hayden:<br />
* Cobi:<br />
* Gina:<br />
* Elise:<br />
* Diana:<br />
* Baek:<br />
* Dennis:<br />
* Max:<br />
* Luke:<br />
* Rob: electric guitar -> 1/4 -> direct-box-eq -> 1/4<br />
* Juan-Pablo:<br />
* Tania:<br />
* Hiroko:<br />
* Adnan:<br />
* Joel:<br />
* Chris W:<br />
* Jeff:<br />
<br />
<br />
[[Category:Courses]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Soundwire-fall-2007&diff=2599Soundwire-fall-20072007-10-02T17:27:01Z<p>Wikimaster: </p>
<hr />
<div>* class [http://ccrma.stanford.edu/groups/soundwire/course/ homepage]<br />
* [[Soundwire-fall2007/People|list of CCRMA folks]]<br />
<br />
=== Next meeting ===<br />
* [[Soundwire-fall2007/Equipment | list of equipment each person plans to bring]]<br />
<br />
[[Category:Courses]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=SGSI07_Music_and_Human_Behavior&diff=2598SGSI07 Music and Human Behavior2007-10-02T17:26:21Z<p>Wikimaster: </p>
<hr />
<div>[http://sgsi.stanford.edu/music/ Stanford Graduate Summer Institute]<br />
<br />
==SGSI Summer course in Musical Behavior==<br />
* [mailto:menon@stanford.edu Vinod Menon]<br />
* [mailto:brg@ccrma.stanford.edu Jonathan Berger]<br />
* Assistants: [mailto:hiroko@ccrma.stanford.edu Hiroko Terasawa], [mailto:shchon@stanford.edu Song-Hui Chon] <br />
* Place: CCRMA, The Knoll. [http://maps.google.com/maps?f=q&hl=en&geocode=&q=660+Lomita+drive+stanford+CA+USA&sll=26.29444,-98.29158&sspn=0.011119,0.016737&ie=UTF8&ll=37.422253,-122.174492&spn=0.00985,0.016737&z=16&iwloc=addr&om=1 Map] <br />
**For concerts, CCRMA stage (3rd Floor.) For lectures, Classroom (2nd Floor.)<br />
<br />
== Guest musicians and speakers ==<br />
* Debra Fong<br />
* Livia Sohn<br />
* St. Lawrence String Quartet<br />
* Malcolm Slaney<br />
* John Chowning<br />
* Gareth Loy<br />
* Paul Kiparsky<br />
<br />
==Schedule==<br />
<br />
===Sunday, 9/16 5:30 p.m. ''Opening Concert''===<br />
----<br />
* Dinner, Concert, Lecture.<br />
* St. Lawrence String Quartet: ''Haydn, String quartet Op. 54.2'', ''Beethoven, String quartet Op. 132.''<br />
* Jonathan Berger: ''Questioning Musical Behavior.''<br />
<br />
===Monday 9/17 ''The Anatomy of Hearing''===<br />
----<br />
* 10:00 a.m. Vinod Menon: ''Brain Organization for Auditory Processing I and II.''<br />
* 12:00 p.m. Lunch Break<br />
* 2:00 p.m. Malcolm Slaney: ''Computational Auditory Models.''<br />
* 3:00 p.m. Menon: ''Functional Brain Imaging.''<br />
* 4:40 p.m. Leave Knoll to Medical Center<br />
* 5:10 p.m. Lesley Robertson: Tour of 3T MRI scanner and Lucas Imaging Center.<br />
<br />
===Tuesday 9/18 ''Learning and Memory''===<br />
----<br />
* 10:00 a.m. Berger and Livia Sohn: ''Largo, Bach C Major Sonata for violin - Performance aspects of attention and memory.''<br />
* 11:00 a.m. Menon: ''Cognitive neuroscience of learning and memory: Implications for music.''<br />
* 12:00 p.m. Lunch Break<br />
* 1:00 p.m. Group project: experiment design. <br />
* 2:00 p.m. John Chowning: ''Perceptual fusion, Gestalt law of common fate, source identification and segregation.''<br />
* 3:00 p.m. Group project: experiment design continued.<br />
* 4:45 p.m. Leave to SF Opera (bus trip)<br />
* 7:00 p.m. SF Opera, Wagner ''Tannhauser''<br />
<br />
===Wednesday 9/19 ''Expectation in Music''===<br />
----<br />
* 10:00 a.m. Berger: ''Haydn Op. 54.2 -- a theory of musical expectations.''<br />
* 11:00 a.m. Menon: ''Cognitive neuroscience of expectation and attention.''<br />
* 12:00 p.m. Lunch Break<br />
* 1:00 p.m. Group project: Implementing the experiment. <br />
* 2:00 p.m. Gareth Loy: ''Information Theory and the Mathematics of Expectation''<br />
* 3:00 p.m. Group project: Implementing the experiment continued.<br />
<br />
===Thursday 9/20 ''Timing and temporal structures''===<br />
----<br />
* 10:00 a.m. Menon: ''Neural basis of temporal structure processing in music.''<br />
* 11:00 a.m. Berger: ''Monophonic polymeter and imbroglio.''<br />
<br />
* 12:00 p.m. Lunch Break<br />
<br />
* 1:00 p.m. Group project: executing the experiment. <br />
* 2:00 p.m. Paul Kiparsky: ''Meter and prosody''<br />
* 3:00 p.m. Group project: executing the experiment continued. <br />
* 4:00 p.m. Chris Costanza and Debra Fong: ''Bach: Sarabande, c minor, cello suite'' and ''Ravel: Sonata for violin and cello.''<br />
<br />
===Friday 9/21 ''Emotion''===<br />
----<br />
* 10:00 a.m. Menon: ''Cognitive neuroscience of emotion in music.''<br />
* 11:00 a.m. Group project presentation. <br />
* 12:00 p.m. Lunch and Concert<br />
* 1:00 p.m. Berger, Menon, Fong, Sohn, and SLSQ: ''Emotion and affect.''<br />
* 2:00 p.m. Menon and Berger: Wrap-up.<br />
<br />
==Readings==<br />
# ''The Neurosciences and Music'' Annals of the New York Academy of Sciences, November 2003 - Vol. 999, Page xi-532. [http://www.blackwell-synergy.com/toc/nyas/999/1 link]<br />
# Peretz I, Zatorre RJ. ''Brain organization for music processing.'' Annu Rev Psychol. 2005;56:89-114. Review. PMID: 15709930 [http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&Cmd=ShowDetailView&TermToSearch=15709930&ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_RVDocSum link]<br />
# Peretz, I. & R. J. Zatorre. 2003. ''The Cognitive Neuroscience of Music.'' Oxford University Press, New York.<br />
# ''The Neurosciences and Music II: From Perception to Performance.'' Annals of the New York Academy of Sciences, December 2005 - Vol. 1060. pp. xi-487. [http://www.blackwell-synergy.com/toc/nyas/1060/1 link] <br />
# Zatorre RJ, Chen JL, Penhune VB. ''When the brain plays music: auditory-motor interactions in music perception and production.'' Nat Rev Neurosci. 2007 Jul;8(7):547-58. PMID: 17585307. [http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&Cmd=ShowDetailView&TermToSearch=17585307&ordinalpos=2&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_RVDocSum link]<br />
# Stewart L, von Kriegstein K, Warren JD, Griffiths TD. ''Music and the brain: disorders of musical listening.'' Brain. 2006 Oct;129(Pt 10):2533-53. Epub 2006 Jul 15. Review. PMID: 16845129. [http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&Cmd=ShowDetailView&TermToSearch=16845129&ordinalpos=11&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_RVDocSum link]<br />
# McDonald I. ''Musical alexia with recovery: a personal account.'' Brain. 2006 Oct;129(Pt 10):2554-61. Epub 2006 Sep 7. PMID: 16959814. [http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&Cmd=ShowDetailView&TermToSearch=16959814&ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_RVDocSum link]<br />
# Sridharan D, Levitin DJ, Chafe CH, Berger J, Menon V. ''Neural dynamics of event segmentation in music: converging evidence for dissociable ventral and dorsal networks.'' Neuron. 2007 Aug 2;55(3):521-32. PMID: 17678862. [http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&Cmd=ShowDetailView&TermToSearch=17678862&ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_RVDocSum link]<br />
# Blood, A.J. & Zatorre, R.J. ''Intensely pleasurable responses to music correlate with activity in brain regions implicated with reward and emotion.'' Proceedings of the National Academy of Sciences, 98, pp. 11818-11823 (2001) [http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&Cmd=ShowDetailView&TermToSearch=11573015&ordinalpos=3&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_RVDocSum link] <br />
# Zatorre, R.J. & Halpern, A.R. ''Mental Concerts: Musical Imagery and Auditory Cortex.'' Neuron, 47, pp. 9-12 (2005) [http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&Cmd=ShowDetailView&TermToSearch=15996544&ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_RVDocSum link]<br />
# Krumhansl, C. L. ''Cognitive Foundations of Musical Pitch.'' New York: Oxford University Press, pp. 16-31 (1990)<br />
# Krumhansl, C.L. ''Music: A Link Between Cognition and Emotion.'' Current Directions in Psychological Science, 11(2), p. 45 (2002) [http://www.blackwell-synergy.com/doi/abs/10.1111/1467-8721.00165 link] [http://www.psychologicalscience.org/members/gotoSynergy.cfm?issn=0963-7214&date=2002&volume=11&issue=2 link]<br />
# Krumhansl, C.L. ''A perceptual analysis of Mozart's Piano Sonata K. 282: Segmentation, tension, and musical ideas.'' Music Perception 13 (3):401-432. (1996) [http://ccrma.stanford.edu/~hiroko/SGSI07/Krumhansl_MP1996.pdf link]<br />
<br />
== Project Materials ==<br />
# Download the materials [http://ccrma.stanford.edu/~hiroko/SGSI07/ link]<br />
==Pre-course assignment==<br />
Please answer the following and e-mail your responses to [mailto:shchon@stanford.edu Song-Hui Chon] <br />
# Succinctly describe what you hope to get out of this course and what you feel you can contribute.<br />
# List five questions regarding music and human musical behavior that you would like to pursue in depth during the week of the summer course.<br />
<br />
<br />
[[Category:Courses]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Realsimple&diff=2597Realsimple2007-10-02T17:05:33Z<p>Wikimaster: </p>
<hr />
<div>The [http://ccrma.stanford.edu/realsimple REALSIMPLE] project is a joint project between [http://www.kth.se/eng/ KTH] in Sweden and [http://ccrma.stanford.edu/ CCRMA], funded by the [http://www.wgln.org/ Wallenberg Global Learning Network]. See the project [http://ccrma.stanford.edu/realsimple home page] for details.<br />
<br />
[[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Openmixer&diff=2596Openmixer2007-10-02T17:05:00Z<p>Wikimaster: </p>
<hr />
<div>'''openmixer'''<br />
<br />
A fully extensible, software-based dedicated mixing console emulation.<br />
<br />
[[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Open_Source_for_HighSchool_Multimedia_and_Journalism&diff=2595Open Source for HighSchool Multimedia and Journalism2007-10-02T17:04:40Z<p>Wikimaster: </p>
<hr />
<div>== Project Summary ==<br />
Written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
Comments and suggestions are always welcome.<br />
<br />
Our purpose is to introduce students in the Tennyson High School Multimedia & Journalism Department to open source tools for their current projects. Current projects include radio screenplays, weekly news reports, educational videos, music composition, and virtual yearbooks. The Multimedia & Journalism Department at Tennyson High School has existed for 3 years and is currently the most prospering department, academically and creatively, at Tennyson. Many of the topics discussed will be related to the design and execution of the astrophysics educational videos I have created. Topics will include DSP effects, localization, video editing software, graphical rendering, and design.<br />
<br />
== Current equipment ==<br />
<br />
* G4 Desktop<br />
* Digital camcorder<br />
* 3 Shure microphones<br />
* unfinished<br />
<br />
[[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Mass_project&diff=2594Mass project2007-10-02T17:03:59Z<p>Wikimaster: </p>
<hr />
<div>Welcome to the Masking Ambient Speech Sounds project Wiki.<br />
<br />
== Project Summary ==<br />
<br />
# Recording and diffusion methodologies - testing and implementation<br />
#* Comparison between PZM (4-channel) and Sound Field (4-directional)<br />
#* Decision taken for the Sound Field, because it gives a better spatial image.<br />
#* Diffusion in a semi-anechoic room, using a 4-channel setup.<br />
# Processing of office recordings<br />
#* Tokyo office recording of all the sounds necessary for calibration and for impulse response generation.<br />
#* Processing of impulse responses inside and outside the room<br />
#* Pink noise calibration recording<br />
#* Room calibration through a generalized equalization methodology, using omni microphone recordings in the real room and in the simulated room as comparison.<br />
# FM masker generation<br />
#* Exploration of different strategies to follow (sinusoidal versus random modulation)<br />
#* Critical band (ERB) width of noise bands.<br />
#* Decision taken for using 3 bands, with random walk modulation (way less annoying than sinusoidal modulation).<br />
# Experiment design and implementation<br />
#* Experiment 01<br />
#*: Beta experiment to test system setup (C++ implementation) for real time experiment with automatic data retrieval. Also, fine tune of the experiment psychoacoustic design.<br />
#* Experiment 02<br />
#*: Masker refinement through a general-purpose process in which the best candidates are selected while the worst are discarded. This is achieved by varying one parameter at a time, then moving to the next stage with the best candidate for that parameter and varying another parameter. For the FM masker, the parameters were center band frequency (3 bands), band amplitude, modulation rate (for each band), and amplitude of the modulation (for each band).<br />
#* Experiment 03<br />
#*: Efficiency, using the Santa Barbara corpus of conversation: with one masker on throughout the whole experiment, random parts of the conversation are presented and the subject is asked whether or not they heard them. The RMS of the random part is recorded, as well as the subject's answer.<br />
#* Experiment 04<br />
#*: Annoyance, in design process<br />
#Spatialization study<br />
#: Study of spatial variables in the directionality of the masker. Generation of a “virtual impulse response” in which the sound (masker) comes from outside the room (where the intruding sound is located) but the filtering effect of the wall is removed.<br />
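The random-walk FM masker described above can be sketched as follows — a minimal illustration, not the project's Matlab/C++ implementation. The band centers (200, 350, 500 Hz) come from these notes; the walk step size, maximum deviation, and function name are assumed values for demonstration.<br />

```python
import numpy as np

def fm_masker(fs=44100, dur=2.0, centers=(200.0, 350.0, 500.0),
              walk_step=2.0, max_dev=30.0, seed=0):
    """Sum of 3 sine bands whose center frequencies follow a slow random walk.

    Band centers are from the project notes; walk_step (Hz per root-second)
    and max_dev (Hz) are illustrative assumptions, not measured values.
    """
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    out = np.zeros(n)
    for fc in centers:
        # Random-walk frequency deviation, clipped so the band stays narrow.
        steps = rng.normal(0.0, walk_step, n)
        dev = np.clip(np.cumsum(steps) / np.sqrt(fs), -max_dev, max_dev)
        inst_freq = fc + dev
        # Integrate instantaneous frequency to get phase, then oscillate.
        phase = 2 * np.pi * np.cumsum(inst_freq) / fs
        out += np.sin(phase)
    return out / len(centers)  # keep the sum roughly within [-1, 1]
```

Each band wanders independently, matching the note above that equal modulation across bands is both less efficient and more annoying.<br />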
<br />
== How to setup and calibrate Tascam 3200 mixer ==<br />
<br />
Detailed instructions:<br />
<br />
*1)Start hdspmixer & hdspconf (all settings automatic)<br />
*2)In a terminal, type cd /usr/bin/ then cpufreq-selector -g performance (sets max CPU)<br />
*3)Open JACK<br />
a.Set Frames/Period to 1024<br />
b.Set Sample Rate to 44100<br />
c.Set Interface to RME Hammerfall<br />
<br />
Mixer config :<br />
*4)Equalizing levels and linking channels<br />
a.Under “SCREEN MODE/NUMERIC ENTRY” click “METER.FADER”<br />
i.Under tab “CH FADER” set gain levels “CH 1-18” equal<br />
ii.Under tab “Master M/F” set bus levels “BUSS 1-16” equal <br />
b.Under “SCREEN MODE/NUMERIC ENTRY” click “ALT-LINK/GRP”<br />
i.Click “SEL” for channel 1 followed by 2, 3, and 4<br />
ii.Double click tab “GROUP ON/OFF”<br />
iii.Click the down cursor to set the next grouping<br />
iv.Click “SEL” for channel 5 followed by 6, 7, and 8 <br />
*5)Setting the speakers for surround sound<br />
a.Click “SEL” for channel 1<br />
i.Under “OUTPUT ASSIGN” select “1”<br />
ii.Make sure “STEREO” and “DIRECT” are unchecked <br />
b.Click “SEL” for Channel 2<br />
i.Under “OUTPUT ASSIGN” select “3”<br />
ii.Make sure “STEREO” and “DIRECT” are unchecked <br />
c.Repeat this process for the following combinations<br />
i.Ch1:1, CH2:3, CH3:5, CH4:7, CH5:13, CH6:14, CH7:15, CH8:16<br />
ii.Channels 1-4 are head-level and Channel 5-8 are above<br />
*6)Set up I/O (if it is already screwed up)<br />
a.Click “ALT_ROUTING” and click “INPUT”<br />
i.Set CH1 to adat-1, CH2 to adat-2, etc…<br />
ii.If you want to set up a record line, do so setting CH9 to M/L 9<br />
1.Set the top knob and switch to appropriate setting <br />
2.Use CH9 fader to set input level to application<br />
b.Click “ALT-ROUTING” and click “OUTPUT SLOT” for output cards<br />
i.Slot A set Trk1-8 to BUSS 1-8 in sequential order (Horizontal)<br />
ii.Slot B set Trk1-8 to BUSS 9-16 in sequential order (Vertical)<br />
Software Config:<br />
<br />
*7)Setting up the software with hardware<br />
a.Go to Application under Bash shell and type “m”, then “make”, then “go”<br />
b.Play the voice recording and set levels to 25 dBA at center<br />
c.Play the masker noise and set levels to 45 dBA at center <br />
*8)Go to “MAIN DIALOG” in software app to set ID & output dir then Repeat 6a<br />
<br />
<br />
== Experiment 01 - Beta Test ==<br />
<br />
[[Image:exp1GUI.png|thumb|GUI Experiment 1|250px|right|GUI Experiment 1]]<br />
The first listening tests will involve project staff members to check if things make sense. If it looks good we'll start working with non-project volunteers. ''Experiment 1'', in the CCRMA "Pit," will take about 30 minutes and involve 30 trials. There will be 6 conditions of masking sound crossed with 5 conditions of speech sounds. The masker (FM noise) and the speech sounds will be presented as if the sources are outside the room. We'll use the measured room model from Tokyo and the exterior sound source position (hallway). The "as if" impression will be created by convolving with the measured impulse responses.<br />
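The "as if outside the room" auralization amounts to convolving each dry signal with the measured impulse response. A minimal FFT-convolution sketch (the function name and peak normalization are our own; the actual processing uses the measured Tokyo-room IRs):<br />

```python
import numpy as np

def auralize(dry, impulse_response):
    """Convolve a dry signal with a measured room impulse response so the
    source sounds as if it were at the measured position (FFT convolution)."""
    n = len(dry) + len(impulse_response) - 1   # full convolution length
    nfft = 1 << (n - 1).bit_length()           # next power of two for the FFT
    wet = np.fft.irfft(
        np.fft.rfft(dry, nfft) * np.fft.rfft(impulse_response, nfft), nfft
    )[:n]
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet     # normalize to avoid clipping
```

With a unit-impulse IR the output equals the input, and a delayed impulse shifts the signal — a quick sanity check before using a real measured response.<br />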
<br />
=== Strategies to define conditions for FM masking noise ===<br />
<br />
To define the conditions of this first experiment, the approach will be to leave all the parameters fixed, except the modulation frequency.<br />
<br />
[http://ccrma.stanford.edu/~jcaceres/yamaha/documentation/experiment01_noises/ Noise set] Contains a complete technical documentation of the masking noise generation. It also contains the soundfiles.<br />
<br />
The conditions of the masking FM noise will be defined by the following criteria:<br />
* 3 bands of FM noise will be used (centered at 200 350 and 500 Hz):<br />
#*: These bands were selected based on an analysis of speech '''recorded''' in the Tokyo office. The motivation behind this decision is to identify the relevant parameters in the leaking voice. For example, we know that the wall filters out much of the high-frequency components, so that is relevant in the selection of the main frequencies.<br />
* The amplitude (volume) of each band will be fixed:<br />
*: The amplitude was tuned in order to psychoacoustically balance the level of the three noise bands that will be used. This balance was done without modulation.<br />
* The amplitude of the modulation will be proportional to the modulation frequency:<br />
*: The motivation behind this choice is to minimize the annoyance effect. When the modulation rate is low, higher amplitudes are more noticed and annoying.<br />
* The relation between the modulation frequencies of the 3 bands is then the main factor defining the conditions:<br />
*: For this experiment, 3 modulation rates are selected: 2, 5 and 7 Hz. The idea is to span some of the frequencies in the range of 2 to 7 Hz. Basically, all combinations of these 3 rates are used for the center frequencies, plus a case with no modulation at all.<br />
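Under that scheme the candidate space is every assignment of the three rates to the three bands, plus the unmodulated case: 3³ + 1 = 28 candidates, from which a tested subset (such as the 6 masker conditions of Experiment 1) is drawn. A sketch of the enumeration (variable names are ours):<br />

```python
from itertools import product

rates = (2.0, 5.0, 7.0)    # modulation rates in Hz, from the notes
centers = (200, 350, 500)  # band center frequencies in Hz

# One condition = a modulation rate assigned to each band.
conditions = [dict(zip(centers, combo)) for combo in product(rates, repeat=3)]
conditions.append(None)    # the no-modulation case
print(len(conditions))     # 28 candidates before pruning to the tested subset
```
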
<br />
=== Findings on the Beta Test ===<br />
<br />
# There is a low-frequency component of the voice that is currently not being masked.<br />
# We need to use a really long conversation that never repeats during the experiment.<br />
# This corpus of conversations needs to have "stationary" properties.<br />
<br />
== Experiment 02 - Masker Refinement==<br />
=== Experiment design ===<br />
* efficiency test <br />
** Stimuli: speech is mixed at randomized places in a stream of masking noise<br />
** Task: "hit the space key when you hear a speech" <br />
** Speech: 5 numbers (one, two, three, four, eight) spoken by a male and a female of different accents. Numbers were chosen so that they cover five vowels. <br />
** Masker: Genetic algorithm approach with human response. We vary one parameter first and then find one or two "sweet spots." Fix the parameter to those found values and vary the next parameter. Choose the best two - repeat this process. <br />
** Analysis: Response rate (response rate is low when speech is masked, we expect.) Response time distribution (more response time when speech is better masked, we expect.) Both analyses can be done within-subject and across-subject. We can also observe what kind of speech is better masked with a particular masking noise.<br />
<br />
=== Implementation ===<br />
[[Image:yamaha_exp2.png|thumb|GUI Experiment 2|250px|right|GUI Experiment 2]]<br />
<br />
*Verbal Instructions:<br />
This is a test where only 2 buttons are required: spacebar and return (Enter). You are going to have 2 test runs in which you will be presented with speech. When you hear any speech, press spacebar immediately afterward to signal that you heard speech. At the end of each cycle a purple bar will light up to let you know the next cycle is ready. You will then press return to begin the next cycle. The first 2 trials are to get you used to pushing the buttons in response to speech; data will be recorded from the beginning of the third trial, testing whether you heard speech. <br />
<br />
*The speech used in the experiment was voiced by Jason and Hiroko, with the intent of neutral stress on vowels. The words chosen were one, two, three, four, eight; these were convolved with an impulse response from the Tokyo conference room, combined with recorded room noise.<br />
<br />
=== Post Experiment Subject Interviews ===<br />
*Phase01:<br />
This test had the most diversity in types of sounds. Since some maskers were not efficient, subjects learned about the rhythm of the speech presented. Subjects clearly described how some sounds worked better in masking than others, since they had an idea of how many sounds were coming, and at what rate, for each masker. Subjects enjoyed this test because differences between maskers were clear. <br />
*Phase02:<br />
Out of the bunch of 27 maskers we picked 2 candidates for our "golden masker." For this test we changed the amplitude of different center frequencies for these 2 maskers, which gave very different sounds throughout the test. Some subjects found that some sounds were noticeably harsher and more annoying to listen to than others. Several subjects noted that one masker worked really well in masking and sounded like being on an airplane. Subjects still enjoyed this test because differences between maskers were clear.<br />
*Phase03<br />
At this point we chose 1 masker and used different frequencies of modulation. Most subjects described the sound as droning, meaning that it entranced or hypnotized them. This had an effect on most subjects, who described the latter half of the test as more difficult to concentrate through. Some subjects claimed to almost fall asleep, making it difficult to give consistent answers. As I administered the test, I noticed the sleepy feeling every single time, so I started leaving the room during the test. Subjects said that they could hear the female voice very clearly when they would hit spacebar (although they would miss more female speech overall). For the male voice that would come through, they would listen for the deep male voice that sounded like short spurts of "wha" and "woo." For the most part, the subjects I observed were hitting spacebar when there was speech and not hitting it when they did not hear speech, as expected. <br />
*Phase04<br />
Most subjects described the sound as droning as well, meaning that it entranced or hypnotized them. This made sense, since we kept the same basic sounds but changed the frequency-modulation amplitude. The main difference in this test, as I observed subjects, is that they would push spacebar repeatedly when no speech was presented. This seems to be because the masking sound played by the 4 speakers above is uncorrelated across channels, producing random interference patterns. I assume that the generated sounds had an interference pattern comparable to the speech used, ultimately confusing the listener. This effect played a role for all subjects that I observed, and I let them continue pushing spacebar throughout the test. Some felt the test was too long because they were falling asleep.<br />
<br />
== Experiment 03 - Efficiency ==<br />
<br />
=== Implementation ===<br />
<br />
I've programmed up experiment 3. This uses the Santa Barbara corpus clips in a design that produces a percentage measure of masker effectiveness. It's for one masker (the best one arrived at from experiment 2) at a fixed playback level.<br />
<br />
Jason has convolved the first SB dialog file, so it plays from the "hallway."<br />
<br />
The subject hears a 2 second clip which the app selects randomly from the convolved file. <br />
As it's playing the app records the maximum RMS of the first channel of the clip. <br />
The subject responds with "yes" or "no" buttons according to whether they heard voices.<br />
The app records the response and the maximum RMS played, and then loops, playing the next randomly chosen 2 second clip.<br />
<br />
This iterates a whole bunch of times over 5 minutes producing easily 50 trials per subject.<br />
The analysis plots the percentage of yes response vs. RMS. We should see a threshold RMS below which the clips were effectively masked. <br />
<br />
For the final "efficiency rating" we go back into the convolved dialog file and calculate the percentage of time the signal is below the threshold.<br />
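The analysis can be sketched as two steps: estimate the threshold RMS from the yes/no trials, then score the convolved dialog file against it. This is an illustrative reconstruction (the binning strategy and function names are ours), not the project's actual analysis code:<br />

```python
import numpy as np

def masking_threshold(rms_values, heard, target=0.5):
    """Estimate the RMS at which the 'yes' rate crosses `target` (50%).

    Sorts trials by RMS, bins them, and returns the lower edge of the first
    bin whose yes-rate exceeds the target. The bin count is an assumption.
    """
    rms_values = np.asarray(rms_values, float)
    heard = np.asarray(heard, float)
    order = np.argsort(rms_values)
    rms_sorted, heard_sorted = rms_values[order], heard[order]
    nbins = 5
    for chunk_r, chunk_h in zip(np.array_split(rms_sorted, nbins),
                                np.array_split(heard_sorted, nbins)):
        if chunk_h.mean() > target:
            return chunk_r[0]
    return rms_sorted[-1]  # never crossed: everything was masked

def efficiency_rating(frame_rms, threshold):
    """Fraction of the dialog whose frame RMS sits below the threshold."""
    frame_rms = np.asarray(frame_rms, float)
    return float(np.mean(frame_rms < threshold))
```

With ~50 trials per subject the per-bin yes-rates are coarse, so averaging across subjects before thresholding would be the natural refinement.<br />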
<br />
The dialog chosen and start times are as follows:<br />
<br />
* Santa Barbara Corpus Clips Used<br />
Each clip is 5 minutes long with the start time indicated below. The tracks were normalized, then tuned to the appropriate dBFS level relative to each other to be within the acceptable threshold level for the experiment.<br />
<br />
* Track / Start time / dBFS<br />
# sbc0001 / 0:23 / -22.1<br />
# sbc0002 / 0:00 / -9.1<br />
# sbc0008 / 0:34 / -4.8<br />
# sbc0011 / 0:14 / -2.3<br />
# sbc015 / 0:00 / -1.9<br />
# sbc020 / 0:00 / -4.4<br />
# sbc024 / 0:00 / -4.1<br />
# sbc025 / 0:00 / -3.5<br />
# sbc027 / 0:00 / -7.0<br />
# sbc029 / 0:00 / -5.7<br />
# sbc048 / 1:15 / -0.8<br />
# sbc050 / 2:17 / -6.5<br />
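The level tuning above amounts to measuring each track's level in dBFS and applying a gain to reach the target. A minimal peak-dBFS sketch (the project may have used RMS-based levels; peak measurement and the function names are assumptions here):<br />

```python
import numpy as np

def dbfs(x):
    """Peak level in dBFS of a float signal scaled to [-1, 1]."""
    peak = np.max(np.abs(np.asarray(x, float)))
    return 20 * np.log10(peak) if peak > 0 else -np.inf

def tune_to(x, target_dbfs):
    """Scale a signal so its peak sits at target_dbfs (e.g. -22.1)."""
    x = np.asarray(x, float)
    return x * 10 ** ((target_dbfs - dbfs(x)) / 20)
```

For example, tuning a clip to -6.5 dBFS leaves its waveform shape intact and only changes the overall gain.<br />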
<br />
== Experiment 04 - Annoyance ==<br />
=== Experiment design ===<br />
* annoyance test<br />
** Ten kinds of masking noise, plus silence and white noise, with intruding noise, presented from 4 surrounding loudspeakers. <br />
** Each one goes on for 30 seconds (or any length) fading in and out for 5 seconds.<br />
** Fade in the masking noise. Start with the word list, mental math, beep, and repeat of the word list. Fade out and fade in some environmental noise (office, traffic, college cafeteria, etc.), then the next masking noise. <br />
** Word list is presented to the subject from a loudspeaker in front at 60 dBA. <br />
** Task: a word list is presented at the start. A subject does mental math for 30 seconds (6-10 questions.) After the beep, the subject has to recall the word list presented at the start. Masking noise switches with fade in/out with an environmental noise. Do the same task with the next masking noise. <br />
<br />
* Final comparison<br />
** For best 3 masking noises, mix in the typical conference noise (speech, paper shuffle, chair noise, typing sounds, and intruding noise) and ask the subjects which one sounds more "inviting."<br />
=== Apparatus To-Do List ===<br />
<br />
*All randomized; total 20 minutes<br />
#Subject walks in with ambient noise<br />
#List over speaker of approx 15 words (20sec)<br />
#linear fade of masker during 15 word recital<br />
#beep/flash to start mental math for as long as possible (or 2.5 min); 3 maskers<br />
#flash to start recital and repeat as much of the list in microphone for as long as they need<br />
#Subject chooses when to start next phase<br />
#quick fade out of masker to next masker while new 15 words played through speaker.<br />
<br />
*Data type<br />
Solutions, time between answers, number of words recalled from the list<br />
<br />
== FM Masking Noises ==<br />
<br />
Variables<br />
*modulation width (critical band or speech sounds)<br />
*modulation rate (0.01 - 0.1 fc) <br />
*sinusoidal or stochastic modulation<br />
<br />
Already fixed<br />
*with broadband noise (what shape, and how loud? - according to the speech)<br />
*band width of the noise (critical band)<br />
*amplitude of each channel (speech sounds spectral distribution)<br />
*number and frequency of center frequencies (3)<br />
<br />
== Conference Call Meetings ==<br />
<br />
=== July 18, 2006 ===<br />
*FM Modulation discussion (Yasushi's Comments, with Juan-Pablo's comment on answer A:):<br />
# Do you have any idea how to specify frequency modulation for each frequency band?<br />
#* A: based on speech freq, ~2-8 Hz<br />
# The period in time for each frequency should be the same?<br />
#* A: No, different. When it's the same, the masking efficiency decreases. It also seems more annoying.<br />
# Modulation speed will be getting faster according to higher frequency, or<br />
#* A: I don't know yet, this is going to be the main parameter in the first experiment I think.<br />
# The frequency modulation considering the voice sound<br />
# We have to analyze how the voice sound is modulated in different frequency bands?<br />
#* A: I think this is the best way, and we have to consider that the wall is filtering out almost all the high frequencies.<br />
<br />
*Discussion of the experiment setup.<br />
<br />
*Look at the documentation, the new example of impulse responses, and delay of arrival.<br />
<br />
=== July 24, 2006 ===<br />
Tuesday 9:30AM '''Japan''' - Monday 5:30PM '''Stanford'''<br />
<br />
* Discuss Experiment 1.<br />
* Ask Atsuko about calibration files and the SPL meter.<br />
* Comment diffusion in the Pit with PZM system (Hiroko).<br />
* Discuss the Experiment Design written by Hiroko and Atsuko.<br />
<br />
=== July 31, 2006 ===<br />
Tuesday 9:30AM '''Japan''' - Monday 5:30PM '''Stanford'''<br />
* Discuss the Experiment Design written by Hiroko and Atsuko.<br />
* Explain experiment setup.<br />
* Discuss Atsuko's agenda at CCRMA.<br />
* Goals for this week are to finish the setup (C++ and Pit room) and to collect and analyze some data from a couple of subjects.<br />
<br />
=== August 21, 2006 ===<br />
Tuesday 9:30AM '''Japan''' - Monday 5:30PM '''Stanford'''<br />
<br />
=== August 28, 2006 ===<br />
Tuesday 9:30AM '''Japan''' - Monday 5:30PM '''Stanford'''<br />
<br />
=== September 04, 2006 ===<br />
Tuesday 9:00AM '''Japan''' - Monday 5:00PM '''Stanford'''<br />
<br />
== Links ==<br />
<br />
*[http://ccrma.stanford.edu/~jcaceres/yamaha/documentation/ MASS Technical documentation], we are generating this documentation from the Matlab scripts. All the functions created are also documented.<br />
*[http://ccrma.stanford.edu/~hiroko/yamaha/ Mass project - support materials by Hiroko], with pictures, sounds and PDF documents on psychoacoustic experiment. <br />
*[http://ccrma.stanford.edu/~jcaceres/yamaha/documentation/expy_cpp/html/inherits.html Experiment C++ Source Code Documentation]<br />
<br />
<br />
[[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=MakerFaire&diff=2593MakerFaire2007-10-02T17:03:29Z<p>Wikimaster: </p>
<hr />
<div>==Introduction==<br />
The [http://ccrma.stanford.edu Center for Computer Research in Music and Acoustics] (CCRMA -- pronounced "karma") is an interdisciplinary center at Stanford University dedicated to artistic and technical innovation at the intersection of music and technology. We are a place where musicians, engineers, computer scientists, designers, and researchers in HCI and psychology get together to develop technologies and make art. In recent years, the question of how we interact physically with electronic music technologies has fostered a growing new area of research that we call Physical Interaction Design for Music. We emphasize practice-based research, using DIY physical prototyping with low-cost and open source tools to develop new ways of making and interacting with sound. At the Maker Faire, we will demonstrate the low-cost hardware prototyping kits and our customized open source Linux software distribution that we use to develop new sonic interactions, as well as some exciting projects that have been developed using these tools. Below you will find photos and descriptions of the tools and projects we will demonstrate.<br />
<br />
==Software Tools==<br />
Planet CCRMA at Home is a collection of open source programs that you can add to a computer running Fedora Linux to transform it into an audio/multi-media workstation with a low-latency kernel, current audio drivers and a nice set of music, midi, audio and video applications (with an emphasis on real-time performance). It replicates most of the Linux environment we have been using for years here at CCRMA for our daily work in audio and computer music production and research. Planet CCRMA is easy to install and maintain, and can be upgraded from our repository over the web. Bootable CD and DVD install images are also available. This software is free.<br />
<br />
[http://ccrma.stanford.edu/planetccrma/software http://ccrma.stanford.edu/planetccrma/software]<br />
<br />
<br />
[[Image:Ardour_sm.png]]<br />
<br />
Ardour - Multitrack Sound Editor<br />
<br />
<br />
<br />
[[Image:Hydrogen_sm.png]]<br />
<br />
Hydrogen - Drum Sequencer<br />
<br />
<br />
<br />
[[Image:Pd-jack-jaaa_sm.png]]<br />
<br />
Pd, Jack and Jaaa - Real-time audio tools<br />
<br />
==Hardware Tools==<br />
In our [http://ccrma.stanford.edu/courses/250a/ courses], we use a prototyping kit based on Atmel AVR microcontrollers, with Pascal Stang's [http://hubbard.engr.scu.edu/embedded/avr/boards/index.html#avrminiv40 AVRmini] at the core. To the AVRmini, we attach an I2C LCD display, solderless breadboard strips, a loudspeaker and sometimes a MIDI jack. In student lab exercises and for prototyping, we hook up sensor circuits on the breadboard and send control signals to a Linux PC over USB, serial, MIDI or Ethernet in order to control open source real-time sound synthesis and processing software. These prototypes are then often built into larger-scale music and interactive sound art projects like the ones below that we will demonstrate at the Maker Faire.<br />
<br />
[[Image:Avrboard.jpg]]<br />
<br />
==WaveSaw==<br />
Most commercial electronic instruments limit the control of sound to one-dimensional controls, such as knobs or faders, whose settings are mapped through various levels of abstraction to create a resulting waveform or timbre. The WaveSaw is inspired by a desire to control sound in a direct and physical way. We want to touch a sound, to manipulate it with our hands as if it is a physical object. The WaveSaw is an instrument whose physical shape is mapped directly to the shape of a waveform or spectrum, and by changing the shape of the instrument we change the sound.<br />
<br />
[[Image:WaveSaw.jpg]]<br />
<br />
The WaveSaw is made of a long, flat, saw-like strip of flexible metal with wooden handles on each end with which the user can bend, twist, and rotate the instrument. The shape of the blade is measured by flex sensors along its length. The flex sensor values are sent via a microcontroller to a computer, where a custom Puredata (Pd) object recreates the lengthwise shape of the saw blade as a table. This table is then used as the basis of either scanned synthesis or spectral filtering. In the case of scanned synthesis, the table is used as a wavetable that is scanned at audio rates to generate a pitched tone whose waveform, and hence spectrum, varies with the shape of the saw blade. Similarly, the table can be used as the spectral shape of a multi-band filter through which any signal can be passed. Additionally, the WaveSaw has flex sensors oriented width-wise on the saw that are used to measure the amount of twist applied to the blade, an accelerometer for sensing orientation in space, and a pressure sensitive resistor on one handle to measure how hard the handle is squeezed.<br />
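The table-as-wavetable idea can be sketched as a simple interpolating wavetable oscillator: the table (standing in for the flex-sensor shape) is scanned at an audio rate to produce a pitched tone whose waveform tracks the blade's shape. This is an illustration of the technique, not the project's actual Pd object:<br />

```python
import numpy as np

def scan_wavetable(table, freq, dur, fs=44100):
    """Scan a (sensor-derived) table as a wavetable oscillator at `freq` Hz,
    with linear interpolation between table points.

    `table` stands in for the WaveSaw's flex-sensor readings; in the real
    instrument the table contents change continuously as the blade bends.
    """
    table = np.asarray(table, float)
    n = len(table)
    phase = (np.arange(int(fs * dur)) * freq / fs) % 1.0  # 0..1 per cycle
    pos = phase * n                                       # position in table
    i0 = pos.astype(int) % n
    i1 = (i0 + 1) % n                                     # wrap around
    frac = pos - np.floor(pos)
    return table[i0] * (1 - frac) + table[i1] * frac
```

The same table can instead be interpreted as the magnitude response of a multi-band filter, which is the spectral-filtering mode described above.<br />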
<br />
==Myrtle==<br />
<br />
Myrtle is a music controller that communicates with a computer via OSC (Open Sound Control, an open-ended machine communication protocol) and MIDI simultaneously. The interface is primarily designed for controlling the pitch, amplitude envelope, and rhythm of three sound sources in real-time. Designed in conjunction with the Pd environment, Myrtle currently controls a bank of FM synthesizers via OSC, and can transmit 12 different user selectable MIDI notes via a standard MIDI out port. These notes are triggered real-time using a fader. <br />
<br />
[[Image:Myrtle_whiteback_s.jpg]]<br />
<br />
Myrtle was designed to be used in a live-performance environment, played solo or as part of an ensemble. Instead of an "all-in-one" design, the functions of Myrtle are fairly specific, giving it a unique sound and feel. However, since it is only a controller and not a stand-alone instrument, it can be mapped to any number of different sounds or devices, limited only by the numerical data it puts out. The typical usage of the controller is with the left hand controlling pitch via the foam strips (see below), and the right hand manipulating the various controls on the right side. There are many ways to use the controller differently than this, however. The goal was to create a new and unique tool for musical expression, and integrated into that goal was the idea that Myrtle would have the ability to control audio synthesis in complicated ways, using an intuitive and easy-to-use design. The combination of 3 different controls - a fader, optical sensors, and a series of buttons, used in conjunction with one another, were all integral in achieving this goal.<br />
<br />
[[Image:Myrtle_strips.jpg]]<br />
<br />
Please see the detailed Myrtle project page here: [http://ccrma.stanford.edu/~breeder/projects/myrtle/myrtle.html http://ccrma.stanford.edu/~breeder/projects/myrtle/myrtle.html]<br />
<br />
[[Image:Myrtle_controls.jpg]]<br />
<br />
==Trees of Pythagoras==<br />
The Trees of Pythagoras is an acoustic, electromagnetically-actuated, computer-controlled, long-stringed instrument, with the important distinction of being a single instrument composed of three physically separate parts. Each piece is, in essence, constructed like a square, extra-large member of the violin family. Each piece consists of a large soundbox connected to a steel string about ten feet long. Each piece sounds acoustically with a wide dynamic range, but only one unit is intended to be played by a musician. The other two pieces are actuated using electromagnets, which are controlled through a Max/MSP patch. Additionally, all three pieces have piezo-electric transducers which feed the sound of each unit back to the computer. The Trees of Pythagoras is a concert instrument intended for live performance.<br />
<br />
[[Image:Trees_1.jpg]]<br />
<br />
The three soundboxes are all similar in construction to a member of the violin family. They are constructed using a variety of plywoods, eliminating differences in wood stiffness due to grain direction, thus allowing for the square shape. Different thicknesses and sometimes different cuts of plywood are used for the top and back plates, allowing for two different sets of plate resonances. Each has an internal architecture with a soundpost and bass bar. Each unit uses a standard contrabass bridge. An important difference from the violin family is that the top and back plates of these soundboxes are considerably more flexible, allowing for greater coupling with the internal air column resonance. Steel signpost bar is employed as a suitable neck, and standard 18 gauge steel wire from the local hardware store is used as a string. The string is freestanding, not unlike an Erhu, and connects to the steel bar at the bottom, lies over the bridge, and is attached at the top of the unit to a tuning peg inserted directly into the steel bar.<br />
<br />
[[Image:Trees_elect.jpg]]<br />
<br />
Two of the three units are actuated using an electromagnet assembly, powered by a standard audio amplifier. I acquired fairly powerful electromagnets with resistances of around 4 ohms at DC, like many small speakers. For this instrument the priority was force: I needed to be able to create large, low-frequency waves on the steel strings at a respectable amplitude. Because I am interested in complex sounds, issues concerning distortion are not important. After much research and experimentation, I created an assembly using two electromagnets facing each other on either side of the steel string. Using ideas developed by Edgar Berdahl and Steven Backer (see<br />
[http://ccrma.stanford.edu/~sbacker/empp/berdahl_backer.pdf http://ccrma.stanford.edu/~sbacker/empp/berdahl_backer.pdf]), I added two rare-earth magnets on either side of each electromagnet to intensify the magnetic field. A stereo<br />
audio amplifier is then used to feed the same signal to both electromagnets, with the polarity reversed for one, so that while one magnet is pushing the other is pulling. This design provides ample force while still being powered by a small-wattage amplifier. With too much power the electromagnets will overheat, so each<br />
electromagnet is attached to a heat sink. For all three units, I constructed a basic piezoelectric transducer using piezo discs and an op-amp-based impedance buffer as a preamplifier.<br />
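The push-pull signal relationship described above can be sketched in code (an illustrative sketch only; the actual drive comes from the Max/MSP patch, and the 60 Hz sine below is a hypothetical example):<br />
<br />
```python
import math

def push_pull_drive(mono_signal):
    """Return (left, right) channels where right is the polarity-inverted
    copy of left, so that one electromagnet pushes while the other pulls."""
    left = list(mono_signal)
    right = [-x for x in left]  # reversed polarity for the opposing magnet
    return left, right

# Example: one cycle of a hypothetical 60 Hz sine drive at 44.1 kHz
sr = 44100
n = sr // 60
drive = [0.5 * math.sin(2 * math.pi * 60 * i / sr) for i in range(n)]
left, right = push_pull_drive(drive)
```
<br />
On the instrument itself the inversion is realized in the amplifier wiring of one channel; the sketch only shows the intended signal relationship.<br />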
<br />
[[Image:Trees_magnets.jpg]]<br />
<br />
==Accordiatron==<br />
<br />
The Accordiatron is a new MIDI controller for real-time performance based on the paradigm of a conventional squeeze box or concertina. It translates the gestures of a performer to the standard communication protocol of MIDI, allowing for flexible mappings of performance data to sonic parameters. When used in conjunction with a real-time signal processing environment, the Accordiatron becomes an expressive, versatile musical instrument. A combination of sensory outputs providing both discrete and continuous data gives the subtle expressiveness and control necessary for interactive music.<br />
<br />
[[Image:Atron_1.jpg]]<br />
<br />
The Accordiatron detects the rotation of and distance between the hands, the latter by means of a potentiometer embedded in the scissor linkage that connects the two end panels. Buttons on either end panel can be used for triggering notes, samples, or any other discrete input. The Accordiatron is based on the premise of building a new interface to capture what are known to be expressive performance gestures while divorcing those gestures from any particular sound source. It has gathered a growing repertoire of compositions using a variety of mappings.<br />
<br />
[[Image:Atron_3.jpg]]<br />
<br />
[http://ccrma.stanford.edu/~gurevich/accordiatron/ http://ccrma.stanford.edu/~gurevich/accordiatron/]<br />
<br />
<br />
[[Category:PID]]<br />
[[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=MUS_253&diff=2592MUS 2532007-10-02T17:03:00Z<p>Wikimaster: </p>
<hr />
<div>'''Center for Computer Assisted Research in the Humanities (CCARH)'''<br />
<br />
Music 253 - Musical Information: An Introduction<br />
<br />
Course Information - Winter 2007<br />
<br />
Course Homepage: http://253.ccarh.org/<br />
<br />
[[Category:Courses]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Jiyeh&diff=2591Jiyeh2007-10-02T17:02:18Z<p>Wikimaster: </p>
<hr />
<div>'''Jiyeh (2006)'''<br />
for computer generated and processed sound<br />
first performance:<br />
A Concert of Music on Ecology and the Environment, CCRMA, November 9 2006 at the ''Imaging Environment'' conference, Stanford University <br />
http://shc.stanford.edu/events/ImagingEnvSchedule.htm<br />
<br />
Program notes: <br />
Jiyeh (2006) <br />
<br />
Jonathan Berger <br />
<br />
Jiyeh is a small coastal town in Lebanon built upon the ancient city of Porphyreon, <br />
reputed to be the site where a giant fish delivered Jonah to the shore. <br />
On July 14th 2006 a coastal power station in Jiyeh was attacked in an Israeli air strike <br />
causing over 20,000 tons of oil to spill into the Mediterranean Sea. Although there has <br />
been relatively little information regarding the ecological impact of this massive spill, a <br />
series of satellite photos shows the dispersion pattern of the oil. These patterns appear <br />
as Baroque-like ornaments that distort the contour of the Lebanese coastline. <br />
<br />
I was in Jerusalem in July 2006 and read a fleeting and innocuous news report regarding <br />
an oil spill on the Lebanese coast apparently caused by an air or ship based missile <br />
attack on an aging power plant in Jiyeh. Little information was forthcoming although the <br />
estimates of the amount of oil spilled were alarming. <br />
In September I asked Jeff Koseff if he had any information about the spill. He replied <br />
that, to his knowledge, there were only satellite photographs and that those were yet to <br />
be carefully analyzed. Tonight's work, the first of a set of two pieces (this for multi-channel <br />
playback and a second for solo violin, percussion and string orchestra), uses <br />
data from the satellite photographs to set parameters for synthesis and processing of <br />
sounds, as well as creating source audio material using a raster scan direct synthesis <br />
method being developed by my PhD student Woon Seung Yeo. <br />
The music represents the evolution of the ornate oil patterns visible in the satellite <br />
images to evoke an auditory display of this disaster. <br />
<br />
Jonathan Berger is a composer and researcher at CCRMA. His compositions include <br />
chamber, symphonic and vocal music as well as works incorporating digital synthesis <br />
and processing. His research includes developing methods and tools for effective <br />
auditory display of complex data. <br />
Berger's recent recording of chamber music for strings will be released this Spring by <br />
Naxos recordings on their American Masters series. <br />
Background:<br />
<br />
----<br />
Details and examples of the sonification methods used.<br />
<br />
Satellite Images: (courtesy DLR, Center for Satellite Based Crisis Information and NASA)<br />
<br />
[[Image:20060716_g.jpg]]<br />
July 15 2006<br />
<br />
[[Image:2-719.jpg]]<br />
July 19 2006<br />
<br />
[[Image:20060723_g.jpg]]<br />
July 23 2006<br />
<br />
[[Image:20060801_g.jpg]]<br />
August 1 2006<br />
<br />
[[Image:804.jpg]]<br />
August 4 2006<br />
<br />
Sonification methods:<br />
<br />
Since the oil disseminated in a generally northward direction, image scans were done from south to north (by flipping the image).<br />
<br />
Woon Seung Yeo's raster scan synthesis method of image sonification (http://ccrma.stanford.edu/~woony/works/raster) provided the core sound materials for the piece. <br />
<br />
Raster scan synthesis is described in our DAFx paper: http://www.dafx.ca/proceedings/papers/p_309.pdf<br />
<br />
<br />
The satellite images were processed and denoised in order to focus on the edges of the coast and of the spill. Examples:<br />
[http://ccrma.stanford.edu/~brg/j1.wav] - July 19<br />
[http://ccrma.stanford.edu/~brg/j2.wav] - July 23<br />
[http://ccrma.stanford.edu/~brg/j3.wav] - August 1<br />
[http://ccrma.stanford.edu/~brg/j4.wav] - August 8<br />
<br />
These sounds were processed using filter settings, temporal stretching and other signal processing methods in which the parameters were all set by measurements of the spill contour in relation to the coastline.<br />
<br />
The width of the spill, measured south to north every 25 pixels, is sonified by setting the filter bandwidth at each sample position.<br />
The coastal shape, as well as the western edge of the spill in each image, is mapped to melodic pitch.<br />
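The raster scan idea, reading image pixels row by row as audio samples, can be sketched as follows (a simplified illustration, not Yeo's actual implementation; the 8-bit pixel range and unit gain are assumptions):<br />
<br />
```python
def raster_scan(image, gain=1.0):
    """Scan a grayscale image (list of rows, pixel values 0-255)
    left-to-right, top-to-bottom, mapping each pixel linearly to an
    audio sample in [-1, 1]."""
    samples = []
    for row in image:
        for px in row:
            samples.append(gain * (px / 127.5 - 1.0))
    return samples
```
<br />
Scanning the flipped image south-to-north, as described above, then amounts to reversing the row order before the scan.<br />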
<br />
Considerable 'artistic license' was then enjoyed.<br />
<br />
<br />
----<br />
Stereo mix-down of 8-channel piece: (beware: it's large!). <br />
<br />
http://ccrma.stanford.edu/~brg/jiyeh-stereo.aif<br />
<br />
Please note this audio file is not for public presentation and may not be copied or distributed.<br />
<br />
<br />
http://creativecommons.org/images/public/somerights20.png<br />
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.5 License and ASCAP<br />
<br />
<br />
[[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Japanese_Film_Club&diff=2590Japanese Film Club2007-10-02T17:01:48Z<p>Wikimaster: </p>
<hr />
<div>During Winter quarter 2007, we will cover masterpieces by Akira Kurosawa: weekly Thursday screenings at 8 pm in the CCRMA classroom, and a few weekend screenings for longer films. <br />
<br />
* Thu. Jan. 25, 8 pm: The hidden fortress - 隠し砦の三悪人 139 mins/monochrome (1958) ZDVD 11415<br />
* Thu. Feb. 1, 8 pm: Rasho-Mon - 羅生門 88 mins/monochrome (1950) ZDVD 2853<br />
* Sat. Feb. 10, 2 pm: The Seven Samurai - 七人の侍 207 mins/monochrome (1954) ZDVD 11796<br />
* Thu. Feb. 15, 8 pm: Rhapsody in August - 八月の狂詩曲 97 mins/color (1991) ZDVD 7776<br />
* Thu. Feb. 22, 8 pm: Drunken Angel - 醉いどれ天使 98 mins/monochrome (1948) ZVC 1148<br />
* Sat. Feb. 24, 2 pm: Ran - 乱 162 mins/color (1985) ZDVD 11144 DISC 1<br />
* Fri. Mar. 10, 11 pm: Yojimbo - 用心棒 110 mins/monochrome (1961) ZDVD 829<br />
* Thu. Mar. 15, 8 pm: Madadayo (aka Not Yet) - まあだだよ 134 mins/color (1993) ZDVD 1648<br />
<br />
----<br />
Thursday films<br />
* Dodesuka-Den - どですかでん 140 mins/color (1970) ZVC 1150<br />
* To Live - 生きる 143 mins/monochrome (1952) ZDVD 5709 DISC 1<br />
* Tokyo Story by Ozu Yasujiro - 小津安二郎: 東京物語 136 mins/monochrome (1953) ZDVD 5312 DISC 1<br />
<br />
Longer weekend films <br />
* The Red Beard - 赤ひげ 185 mins/monochrome (1965) ZDVD 3722<br />
* The Shadow Warrior - 影武者 179 mins/color (1980) ZDVD 9781 DISC 2<br />
<br />
[[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Gloves_of_Shaolin&diff=2589Gloves of Shaolin2007-10-02T17:01:06Z<p>Wikimaster: </p>
<hr />
<div>== Project Summary ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
Shaolin martial arts can be traced back fourteen hundred years to the [http://en.wikipedia.org/wiki/Shaolin Shaolin Temple] in the Henan Province of China. Its monks taught [http://en.wikipedia.org/wiki/Shaolin_kung_fu Shaolin kung fu] to many students, who then traveled to other countries, spreading this fighting system. Shaolin kung fu was based on the movements of five animals: the dragon, tiger, snake, leopard, and crane. The Gloves of Shaolin is an interactive instrument that communicates with PD through OSC. The purpose of the gloves is to trigger the different animal styles (modes) the user is imitating and to sonify Shaolin katas. One intended application is [http://cm-wiki.stanford.edu/wiki/Astro-Sonification astro-sonification], in which each style will be able to throw virtual projectiles that move audibly through space with physical properties such as inertia and resistance in a homogeneous or inhomogeneous [http://scienceworld.wolfram.com/physics/Ether.html ether].<br />
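The OSC link to PD can be sketched as follows, using only the Python standard library (the address, mode value, and port are hypothetical; a real setup would match whatever the PD patch's OSC receiver expects):<br />
<br />
```python
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float32 arguments
    (address string, type-tag string, big-endian floats,
    each field null-padded to a multiple of 4 bytes)."""
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode())
    msg += pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

# Send a hypothetical "mode 2" (e.g. snake style) trigger to PD on UDP port 9001
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/shaolin/mode", 2.0), ("127.0.0.1", 9001))
sock.close()
```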
<br />
<br />
<br />
''' '' Pics and Specs coming soon...'' '''<br />
<br />
[[Category:PID]][[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Computer_music_related_conferences_and_publications&diff=2588Computer music related conferences and publications2007-10-02T16:59:23Z<p>Wikimaster: </p>
<hr />
<div>This is a place to collect information about conferences and publications related to computer music. Useful information includes conference deadlines and dates, links to publication homepages, and how to get a copy of each publication at CCRMA.<br />
<br />
== Journals with articles available online==<br />
[http://www.mitpressjournals.org/loi/comj Computer Music Journal (MIT Press)]<br />
<br />
<br />
AES Convention and Conference preprints, Journal of AES<br />
<br />
[http://www.aes.org/e-lib/inst http://www.aes.org/e-lib/inst]<br />
<br />
<br />
<br />
[http://asa.aip.org/ Acoustical Society of America], [http://scitation.aip.org/jasa/ JASA]<br />
<br />
<br />
<br />
[http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=78 IEEE Transactions on Signal Processing]<br />
<br />
[http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=10376 IEEE Transactions on Audio, Speech, and Language Processing ]<br />
<br />
[http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=97 IEEE Signal Processing Letters]<br />
<br />
[http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=79 IEEE Signal Processing Magazine]<br />
<br />
<br />
<br />
[http://www.tandf.co.uk/journals/titles/09298215.asp Journal of New Music Research] (Taylor & Francis)<br />
<br />
<br />
== Conferences ==<br />
<br />
Gary Scavone's list<br />
http://www.music.mcgill.ca/~gary/conferences.html<br />
<br />
<br />
'''DAFx07 http://dafx.labri.fr/'''<br />
<br />
Proceedings available on conference web sites.<br />
<br />
<br />
'''International Congress on Acoustics (ICA)''' http://www.ica2007madrid.org/<br />
<br />
Proceedings for some years in SAL and SAL3<br />
<br />
<br />
'''International Symposium on Musical Acoustics (ISMA)''' <br />
<br />
<br />
'''[http://www.computermusic.org/ International Computer Music Conference ]'''<br />
<br />
'''ICMC 07''' http://www.icmc2007.net/<br />
<br />
Proceedings for some years in Music Library<br />
<br />
<br />
'''AES Conferences and Conventions''' http://www.aes.org/events/<br />
<br />
available online http://www.aes.org/e-lib/inst/<br />
<br />
<br />
'''WASPAA 2007''' http://www.kecl.ntt.co.jp/icl/signal/waspaa2007/<br />
<br />
available on [http://ieeexplore.ieee.org IEEExplore]<br />
<br />
<br />
'''Asilomar Conference on Signals, Systems, and Computers''' http://www.asilomarssc.org/<br />
<br />
available on [http://ieeexplore.ieee.org IEEExplore]<br />
<br />
<br />
'''ICASSP'''<br />
<br />
available on [http://ieeexplore.ieee.org IEEExplore]<br />
<br />
<br />
<br />
== See also ==<br />
<br />
[[Upcoming music technology conferences]]<br />
<br />
[[Category:General]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Category:General&diff=2587Category:General2007-10-02T16:58:47Z<p>Wikimaster: </p>
<hr />
<div>General Topics</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Computer_Music&diff=2586Computer Music2007-10-02T16:58:37Z<p>Wikimaster: </p>
<hr />
<div>Computer music is ''music'' created by or with the aid of a ''computer''.<br />
<br />
==History of Computer Music==<br />
*1956: ''Illiac Suite'', arguably the first computer-aided musical composition, by Lejaren A. Hiller and Leonard M. Isaacson, then of the University of Illinois at Urbana-Champaign, premieres.<br />
*1957: Max Mathews (then of Bell Labs) writes MUSIC, a program for creating sound (including musical sound) with a computer.<br />
*1958: The Experimental Music Studios of UIUC are founded by Lejaren A. Hiller.<br />
*1958: The Columbia-Princeton Electronic Music Center is founded by Vladimir Ussachevsky (1911-1990) and Otto Luening (1900-1996) with a grant from the Rockefeller Foundation.<br />
*1964: Jean-Claude Risset arrives at Bell Labs and creates the world's first computer-generated trumpet sound.<br />
*1972: UCSD Center for Music Experiment is founded, with Roger Reynolds as founding director.<br />
*1975: CCRMA (Stanford University) is founded by John Chowning and Leland Smith.<br />
*1977: IRCAM (Paris) opens, with Pierre Boulez at the helm, and Luciano Berio, Vinko Globokar, Jean-Claude Risset, and Max Mathews included as administrators.<br />
*1979: F. Richard Moore joins Music Faculty of UCSD, founding the Computer Audio Research Laboratory (CARL).<br />
*1980s: Columbia and Princeton dissociate regarding the Electronic Music Center.<br />
*1984: MIT Media Lab is founded by Nicholas Negroponte and former MIT President Jerome Wiesner. Barry Vercoe and Tod Machover are founding members.<br />
*1984: The Society for Electro-Acoustic Music in the United States (SEAMUS) is founded.<br />
*1987: CNMAT (Berkeley, CA) is founded by Richard Felciano.<br />
<br />
[[Category:General]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=CCRMA_Studio_Guides&diff=2585CCRMA Studio Guides2007-10-02T16:57:46Z<p>Wikimaster: </p>
<hr />
<div>Coming soon, guides for Studios C, D, E.<br />
<br />
[[Category:CCRMA User Guide]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Ambisonics_and_Impulse_Response&diff=2584Ambisonics and Impulse Response2007-10-02T16:56:58Z<p>Wikimaster: </p>
<hr />
<div>== Project Summary ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
The goal is to define a methodology for creating speaker arrays for ambisonic playback.<br />
<br />
[[Category:Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Category:Projects&diff=2583Category:Projects2007-10-02T16:56:20Z<p>Wikimaster: </p>
<hr />
<div>Ongoing works and projects at CCRMA.</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Ambisonic_Theater&diff=2582Ambisonic Theater2007-10-02T16:55:50Z<p>Wikimaster: </p>
<hr />
<div>== Project Summary ==<br />
written by Jason Sadural (jsadural@ccrma.stanford.edu)<br />
comments and suggestions always welcomed<br />
<br />
In creating Ambisonic compositions, the audio exists as a data set that is represented in a 3D space. The advantages to this architecture is that a piece can be transported to arbitrary locations and be rendered relatively quickly for various speaker configurations maintaining intended spatial properties. Ambisonics soundfield microphones give rise to the ability to record and recreate a "soundfield" in xyzw(w being pressure variable). The wavefronts produced by multi-channel ambisonic playback retains recorded audial cues including room size and reflective properties. In this experiment it is necessary to define a platform in which a 3d graphical representation and ambisonic composition tools are rendered simultaneously in which the user can choose to spatialize visually or algorithmically. The goal is to convincingly create virtual sources interacting with actual recorded soundfields and simultaneously have the virtual image interact with the actual HD/Imax recorded image.<br />
<br />
[[Category: Projects]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Category:Courses&diff=2581Category:Courses2007-10-02T16:55:12Z<p>Wikimaster: </p>
<hr />
<div>Course Wiki pages.</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=220a-fall-2007&diff=2580220a-fall-20072007-10-02T16:54:53Z<p>Wikimaster: </p>
<hr />
<div>= FAQ =<br />
<br />
* ''' What is "terminal" and how do I use it?'''<br />
<br />
''Terminal is a way to type commands to the Linux operating system, to navigate between directories (folders), copy files, run programs, and do many other useful things.''<br />
<br />
> To open a terminal, right-click the mouse anywhere on the screen and select "New Terminal".<br />
<br />
A terminal will open, and a command line prompt will appear that identifies the computer (cmn#), your login name and the '''working directory''', which is the location in the directory structure from which you are navigating. After the prompt, you can type commands.<br />
<br />
For example, you can navigate around the file system by using the "change directory" command:<br />
'''cd pd-lab'''<br />
will navigate into your pd-lab directory.<br />
<br />
> If you want to see your current location in the directory structure, type<br />
'''pwd'''<br />
which stands for "print working directory".<br />
<br />
> If you want to see what files and folders are in the working directory, type the "list" command:<br />
'''ls'''<br />
<br />
> If you type a command and then a new command prompt doesn't show up again, hit the return key and a new prompt should appear. Before entering new commands, it is important to have a new command prompt.<br />
<br />
<br />
<br />
* ''' How do I copy files from the course directory into my project directory?'''<br />
<br />
In a terminal, you can use Linux commands (based on Unix commands, tutorial:[http://www.ee.surrey.ac.uk/Teaching/Unix/]) to copy files.<br />
<br />
> The copy command, '''cp''', allows you to make a copy of one file to whatever location you specify.<br />
<br />
For example, if you have a project directory called "pd-lab" and you want to copy the "straightWire" Pd patch from the course examples directory, after the command prompt you would type exactly (including spaces):<br />
<br />
'''cp /usr/ccrma/web/html/courses/220a-fall-2007/pd/straightWire.pd .'''<br />
<br />
The period at the end indicates the location where the file will be copied is the working directory.<br />
<br />
> If you want to copy an entire "pd" directory and its sub-directories, you would type:<br />
<br />
'''cp -r /usr/ccrma/web/html/courses/220a-fall-2007/pd/* .'''<br />
<br />
<br />
<br />
* ''' How can I get information about how to use Linux commands?'''<br />
<br />
The "man pages" (manual) can be accessed by typing: '''man''' followed by a space and whatever command you want to find out about. For example, '''man cp''' will pull up the man page for the copy command. Hit the "q" key (quit) to exit the <br />
manual.<br />
<br />
<br />
* '''How do I play MP3s on the Linux machines?'''<br />
<br />
Use the XMMS media player.<br />
<br />
<br />
<br />
* '''Where can I get the software for my own computer?'''<br />
<br />
> If you have a computer that runs Linux, or a PC you want to run Linux on, you can install Planet CCRMA: [http://ccrma.stanford.edu/planetccrma/software/] and get all the software we use here. Talk with Nando or Carr for more info or to get help with this.<br />
<br />
> If you have a Mac or Windows machine, there are versions of some of the applications used in the class that you can download for free and install on your computer.<br />
<br />
get Pd: [http://www.puredata.org/]<br />
<br />
get Audacity: [http://audacity.sourceforge.net/]<br />
<br />
get ChucK: [http://chuck.cs.princeton.edu/]<br />
<br />
<br />
<br />
* '''Why are there so many steps in the labs?'''<br />
<br />
Working with music-making tools on a computer can involve engaging with the machine on multiple levels, depending on the software tool(s) and the nature of the musical project. <br />
<br />
In this course we will be exploring a range of ways of working with computers to make music. Many steps, and much checking of settings, are part of the process. Details are covered in the labs, classes, and conversations with other students and the instructors. Please ask questions!<br />
<br />
<br />
* '''Can I login to the CCRMA system from my home computer?'''<br />
<br />
1) From a terminal on your own computer, use the ssh (secure shell) command; instructions are here: [http://cm-wiki.stanford.edu/wiki/Remote_Access]<br />
<br />
<br />
<br />
* '''How do I transfer files from the CCRMA system to my home computer or vice versa?'''<br />
<br />
1) From a terminal on your own computer, use the '''scp''' (secure copy) command:<br />
<br />
'''scp you@ccrma-gate.stanford.edu:/thepathofthefileyouwanttotransfer .'''<br />
<br />
or, for copying an entire directory:<br />
<br />
'''scp -r you@ccrma-gate.stanford.edu:/thepathofthedirectoryyouwanttotransfer .'''<br />
<br />
<br />
2) Use SFTP (secure file transfer protocol) from a variety of applications, including the "Go" menu - "Connect to Server" on Mac OS X. You will login with your CCRMA name and password.<br />
<br />
[[Category: Courses]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=MediaWiki:Mainpage&diff=2480MediaWiki:Mainpage2007-10-01T19:01:46Z<p>Wikimaster: </p>
<hr />
<div>Special:Categories</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2310Center for Computer Research in Music and Acoustics2007-08-14T00:11:30Z<p>Wikimaster: </p>
<hr />
<div><blockquote><br />
The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.<br />
</blockquote><br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview<br />
<br />
[[Category:Main]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2309Center for Computer Research in Music and Acoustics2007-08-14T00:10:54Z<p>Wikimaster: </p>
<hr />
<div><div style="color:blue"><br />
The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.<br />
</div><br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview<br />
<br />
[[Category:Main]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2308Center for Computer Research in Music and Acoustics2007-08-14T00:10:14Z<p>Wikimaster: </p>
<hr />
<div><div style="color:blue">The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.</div><br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview<br />
<br />
[[Category:Main]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2307Center for Computer Research in Music and Acoustics2007-08-14T00:08:52Z<p>Wikimaster: </p>
<hr />
<div><div color="blue">The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.</div><br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview<br />
<br />
[[Category:Main]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2306Center for Computer Research in Music and Acoustics2007-08-14T00:06:26Z<p>Wikimaster: </p>
<hr />
<div><div>The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.</div><br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview<br />
<br />
[[Category:Main]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Category:Main&diff=2305Category:Main2007-08-13T23:50:13Z<p>Wikimaster: </p>
<hr />
<div>Main Page Categories</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2304Center for Computer Research in Music and Acoustics2007-08-13T23:49:52Z<p>Wikimaster: </p>
<hr />
<div>The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview<br />
<br />
[[Category:Main]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Category:PID_2007&diff=2303Category:PID 20072007-08-13T23:49:33Z<p>Wikimaster: </p>
<hr />
<div>This is the Physical Interaction Design Workshop Category page.<br />
<br />
[[Category:Main]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Catagory:Main&diff=2302Catagory:Main2007-08-13T23:48:18Z<p>Wikimaster: </p>
<hr />
<div>Main Categories</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Category:PID_2007&diff=2301Category:PID 20072007-08-13T23:48:01Z<p>Wikimaster: </p>
<hr />
<div>This is the Physical Interaction Design Workshop Category page.<br />
<br />
[[Catagory:Main]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2300Center for Computer Research in Music and Acoustics2007-08-13T23:47:32Z<p>Wikimaster: </p>
<hr />
<div>The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview<br />
<br />
[[Catagory:Main]]</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2299Center for Computer Research in Music and Acoustics2007-08-13T23:46:30Z<p>Wikimaster: </p>
<hr />
<div>The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2298Center for Computer Research in Music and Acoustics2007-08-13T23:46:21Z<p>Wikimaster: </p>
<hr />
<div>The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.<br />
<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2297Center for Computer Research in Music and Acoustics2007-08-13T23:45:07Z<p>Wikimaster: </p>
<hr />
<div>The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.<br />
<br />
<br />
----<br />
<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2296Center for Computer Research in Music and Acoustics2007-08-13T23:42:35Z<p>Wikimaster: </p>
<hr />
<div>The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.<br />
<br />
<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2295Center for Computer Research in Music and Acoustics2007-08-13T23:41:05Z<p>Wikimaster: </p>
<hr />
<div>== The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.==<br />
<br />
<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2294Center for Computer Research in Music and Acoustics2007-08-13T23:40:51Z<p>Wikimaster: </p>
<hr />
<div><br />
== The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool.<br />
==<br />
<br />
<br />
<br />
[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview</div>Wikimasterhttps://ccrma.stanford.edu/mediawiki/index.php?title=Center_for_Computer_Research_in_Music_and_Acoustics&diff=2293Center for Computer Research in Music and Acoustics2007-08-13T23:38:31Z<p>Wikimaster: </p>
<hr />
<div>[http://cm-wiki.stanford.edu/wiki/Special:Allpages Current Pages]<br />
<br />
== Recently added pages ==<br />
<br />
* [[realsimple]] project overview</div>Wikimaster