<p>CCRMA Wiki - Colloquium. Revision of 2023-01-26 by Mulshine (edit summary: /* Winter Quarter (2023) */).</p>
<hr />
<div>'''Wednesday 5:30pm PT (CCRMA Classroom & Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happens every Wednesday during the academic year from 5:30 &#8211; 7:00pm and meets in the CCRMA Classroom, Knoll 217, often also with a Zoom presence.<br />
<br />
Nette and Matt (with the help of this wiki page) are organizing the 2023 colloquia.<br />
<br />
<br />
= Winter Quarter (2023)=<br />
*'''1/11 (Week 1) - Kanru Hua, CEO of Dreamtonics (makers of Synthesizer V), Japan (https://dreamtonics.com/en/synthesizerv/) (Contact: MAMST student Benny (Shicheng) Zhang)<br />
*'''1/18 (Week 2) - Game and International Snacks Night (Nette & Kunwoo)<br />
*'''1/25 (Week 3) - Community-wide rapid-fire talks! ''' Share your recent thoughts, explorations, work, hobbies. Get to know what your peers are up to. Everyone is invited to share. <br />
**Speaker 1: Mike Mulshine<br />
**Speaker 2: Ge Wang<br />
**Speaker 3: Jarek Kapuscinski and Eito Murakami<br />
**Speaker 4: Fernando Lopez-Lezcano <br />
**Speaker 5: Chris Chafe<br />
**Speaker 6: Julius Smith (1-minute announcement: Music 423 meets Thursdays at 4:30 pm in the Seminar Room; 20-minute update there tomorrow)<br />
**Speaker 7: Kimia Koochakzadeh-Yazdi<br />
**Speaker 8: Travis Skare<br />
**Speaker 9: Matt Wright<br />
**Speaker 10: Julie Zhu<br />
**Speaker 11: Nima Farzaneh<br />
**Speaker 12: <br />
*'''2/1 (Week 4) - TBD / Skills Share (Matt can organize)<br />
*'''2/8 (Week 5) - Laura Steenberge meet in Listening Room (Chris Chafe)<br />
*'''2/15 (Week 6) - Scott Oshiro [quantum music] / Mischa Dohler [Ericsson, low-latency XR] (Chris Chafe)<br />
*'''2/22 (Week 7) - Webchuck (Chris Chafe + lots of webchuck contributors)<br />
*'''3/1 (Week 8) - TBD / Music Business talk by Anne Van Der Erve of Warner Music Benelux <br />
*'''3/8 (Week 9) - [https://perbloland.com/ Per Bloland]<br />
*'''3/15 (Week 10) - [https://www.jayafrisando.com Jay Afrisando] (Constantin)<br />
<br />
= Spring Quarter (2023)=<br />
*'''4/5 (Week 1) - [https://adamstanovic.com Adam Stanović] (Constantin)<br />
*'''4/12 (Week 2) - (tentative) Graham Wakefield ([https://artificialnature.net/#tab-artists Artificial Nature], [https://cycling74.com/books/go Generating Sound and Organizing Time]) (Matt)<br />
*'''4/19 (Week 3) - Matt Wright, Fernando Lopez-Lezcano / Bing while the GRAIL is set up (w/ CC)<br />
*'''4/26 (Week 4) - Aurie Hsu (Julia Mills) tentatively<br />
*'''5/3 (Week 5) - Conference-style talks. ''' This includes longer form presentations or lectures. Reach out to Mike (mulshine@stanford.edu) to sign up. <br />
*'''5/10 (Week 6) - TBD / Skills share (Matt can organize)<br />
*'''5/17 (Week 7) - Gareth Loy -- Introduction to the Player programming language<br />
*'''5/24 (Week 8) - TBD<br />
*'''5/31 (Week 9) - MST Capstone Project showcase?<br />
*'''6/7 (Week 10) - MST Capstone Project showcase?<br />
<br /><br /><br /><br /><br />
<br />
= Past - Fall Quarter (2022)=<br />
<br />
*'''09/28 (Week 1) New Student Introductions<br />
**Speaker 1: Sneha Shah<br />
**Speaker 2: Josh Mitchell<br />
**Speaker 3: Emily Kuo<br />
**Speaker 4: Balazs & Truls<br />
**Speaker 5: Julia Yu<br />
**Speaker 6: Senyuan Fan<br />
**Speaker 7: Celeste Betancur<br />
**Speaker 8: Victoria Litton<br />
**Speaker 9: Yiheng Dong<br />
**Speaker 10: Eito Murakami<br />
**Speaker 11: Soohyun Kim<br />
**Speaker 12: Luna Valentin<br />
**Speaker 13: Terry Feng<br />
**Speaker 14: Alex Han<br />
**Speaker 15: Benny Zhang<br />
**Speaker 16: Sami Wurm<br />
**Speaker 17: Neha Rajagopalan<br />
<br />
*'''10/05 (Week 2) Faculty and Staff Rapid Fire<br />
**Speaker 1: Chris Chafe<br />
**Speaker 2: Ge Wang<br />
**Speaker 3: Patricia Alessandrini<br />
**Speaker 4: Marina Bosi<br />
**Speaker 5: Julius Smith<br />
**Speaker 6: Jarek Kapuściński<br />
**Speaker 7:<br />
**Speaker 8:<br />
**Speaker 9:<br />
**Speaker 10: Mark Rau<br />
**Speaker 11:<br />
**Speaker 12:<br />
**Speaker 13:<br />
**Speaker 14:<br />
**Speaker 15:<br />
<br />
*'''10/12 (Week 3) WINGS and Peer Mentoring in Music ''' (Mara Mills visit postponed to Spring)<br />
<br />
*'''10/19 (Week 4) Faculty and Staff Rapid Fire, part 2<br />
**Speaker 1: Takako Fujioka<br />
**Speaker 2: Eleanor Selfridge-Field<br />
**Speaker 3: Craig Stuart Sapp<br />
**Speaker 4: Craig Stuart Sapp<br />
**Speaker 5: Poppy Crum<br />
**Speaker 6: Jonathan Berger<br />
**Speaker 7: Matt Wright<br />
**Speaker 8: Fernando Lopez-Lezcano<br />
**Speaker 9: Constantin Basica<br />
**Speaker 10: Stephanie Sherriff<br />
**Speaker 11:<br />
**Speaker 12:<br />
**Speaker 13:<br />
**Speaker 14:<br />
**Speaker 15:<br />
<br />
*'''10/26 (Week 5) Romain Michon, Tanguy Risset, Maxime Popoff : [https://ccrma.stanford.edu/events/high-level-programming-of-fpgas-audio-real-time-signal-processing-applications High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications]<br />
<br />
*'''11/2 (Week 6) - TBD<br />
<br />
*'''11/9 (Week 7) - Pizza & Pedagogy: Assignments and Evaluations<br />
<br />
*'''11/16 (Week 8) - Student-only Town Hall<br />
<br />
*'''11/23 (Week 9) - NO SEMINAR (Thanksgiving break)<br />
<br />
*'''11/30 (Week 10) - planning session for Winter and Spring colloquia<br />
<br />
<br /><br />
<br />
= Spring Quarter (2022)=<br />
<br />
*'''03/30 - Spring Welcome Dinner at Treehouse'''<br />
*'''04/06 - In-House Project and Research Updates (Everyone is encouraged to present)'''<br />
*'''04/13 - BREAK'''<br />
*'''04/19* Tuesday - Installation by J. Mills'''<br />
*'''04/27 - [http://maramills.org/ Mara Mills] (Virtual Talk)'''<br />
*'''05/04 - Inclusive Teaching Workshop (Lloyd May & CTL)'''<br />
*'''05/11 - BREAK '''<br />
*'''05/18 - BREAK'''<br />
*'''05/25 - BREAK'''<br />
*'''06/01 - BREAK'''<br />
<br />
<br />
*''' Future colloquia already booked:'''<br />
*10/12 Mara Mills - History of PCM<br />
<br />
= Past - Winter Quarter (2022)=<br />
*'''01/05 - BREAK'''<br />
<br />
*'''01/12 - David Kanaga''' Composer/designer behind the games: [https://store.steampowered.com/app/223450/Dyad/ Dyad], [https://store.steampowered.com/app/219680/Proteus/ Proteus], [https://store.steampowered.com/app/284260/PANORAMICAL/ PANORAMICAL], & the podcast-opera [https://player.fm/series/soft-valkyrie Soft Valkyrie] ([https://stanford.zoom.us/rec/play/mieXwoFVZvaXBeyMXjVYfECaDWF1g-cPB6crtb7F4XDWLBZd9VQrm51DGPZ4MzmkigERYPVV_fEUcpGS._UjZ93byy84C4Z8b?continueMode=true&_x_zm_rtaid=5upEGBnLQ1y0Lo9FUvcfhQ.1642105886449.38f96f3577c62100fbf9b412b90ef670&_x_zm_rhtaid=270 Recording available here])<br />
<br />
*'''01/19 - [http://www.marcevanstein.com/ Marc Evanstein]'''<br />
<br />
*'''01/26 - Walker Davis & Alex Mitchell from [https://boomy.com/ boomy] '''<br />
<br />
*'''02/02 - Social Event and Student-Only Meeting '''<br />
<br />
*'''02/09 - [http://vibeke.info/ Vibeke Sorensen] '''<br />
<br />
*'''02/16 - Unofficial Social and Welcome to WasteLAnd!'''<br />
<br />
*'''02/23 - Break'''<br />
<br />
*'''03/02 - Rapid Fire and Conference Style Talks''' (Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Tamilore Awosile (10 mins)<br />
** Speaker 2: Frank Mondelli (Conference Style)<br />
** Speaker 3: Lloyd May (Rapid Fire)<br />
** Speaker 4: Julia Mills (Conference Style)<br />
** Speaker 5: Nima Farzaneh (Conference Style)<br />
<br />
*'''03/09 - CCRMA Town Hall'''<br />
<br />
*'''03/16 - Break'''<br />
<br />
= Past - Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Lloyd May (Rapid Fire)<br />
** Speaker 2: Chris Chafe (Rapid Fire via zoom, assuming my internet works after the storm)<br />
** Speaker 3: Mark Rau (Rapid Fire)<br />
** Speaker 4: Champ Darabundit (Conference)<br />
** Speaker 5: Eleanor Selfridge-Field (Rapid Fire)<br />
** Speaker 6: Craig Stuart Sapp (Rapid Fire)<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 Special Guest Talks:''' Nils Tonnätt & Victoria Shen<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: 12pm - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town Hall!! <br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.'''</span><br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul DeMarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician whose focus and passion has been sharing the joy of music with others.<br />
In 2018, the success of an innovative music program she had designed for senior citizens convinced her to<br />
pursue a simpler route to learning the piano. Her engineering background helped her start working on an<br />
idea that bridges the gap between music and technology.<br />
<br />
Fingering has always been, and remains, one of the major elements of success for keyboard players:<br />
correct fingering helps the performer deliver a better technical and musical performance. This research<br />
presents a technique for generating fingerings for any sequence of notes. Dynamic programming and<br />
mathematics are central to the paper; they work alongside rules set by pianists to calculate the most<br />
practical fingerings for any musical passage.<br />
<br />
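The core idea described above, assigning a finger to each note by minimizing an accumulated cost with dynamic programming, can be sketched roughly as follows. This is an illustrative sketch only: the `transition_cost` model below is invented for demonstration and is not the rule set or algorithm from the talk.

```python
def transition_cost(prev_finger, finger, interval):
    """Penalty for moving from prev_finger to finger across a pitch
    interval (in semitones). A crude stand-in for pianists' rules."""
    span = finger - prev_finger          # change in finger number
    stretch = abs(interval - 2 * span)   # mismatch between hand span and interval
    repeat = 3 if finger == prev_finger and interval != 0 else 0
    return stretch + repeat

def best_fingering(pitches, fingers=(1, 2, 3, 4, 5)):
    """Return one minimum-cost finger per note (Viterbi-style DP)."""
    # cost[i][f] = cheapest cost of fingering notes 0..i ending with finger f
    cost = [{f: 0 for f in fingers}]
    back = []
    for i in range(1, len(pitches)):
        interval = pitches[i] - pitches[i - 1]
        row, ptr = {}, {}
        for f in fingers:
            prev = min(fingers,
                       key=lambda p: cost[-1][p] + transition_cost(p, f, interval))
            row[f] = cost[-1][prev] + transition_cost(prev, f, interval)
            ptr[f] = prev
        cost.append(row)
        back.append(ptr)
    # Backtrack from the cheapest final finger to recover the full path.
    f = min(fingers, key=lambda x: cost[-1][x])
    path = [f]
    for ptr in reversed(back):
        f = ptr[f]
        path.append(f)
    return list(reversed(path))
```

For an ascending C-major fragment, `best_fingering([60, 62, 64, 65, 67])` returns one finger (1-5) per note; a real system would replace the toy cost with pianist-derived rules.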
The ultimate goal is to facilitate the process of learning the piano using an AR platform, which helps<br />
music instruction scale and makes virtual instruction more productive and impactful. If this research<br />
succeeds in the AR setting, it could also be applied to robotic tasks in educational programs, video<br />
games, and medical fields.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=24015Colloquium2023-01-07T00:16:45Z<p>Mulshine: /* Spring Quarter (2023) */</p>
<hr />
<div>'''Wednesday 5:30pm PT (CCRMA Classroom & Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happens every Wednesday during the academic year from 5:30 &#8211; 7:00pm and meets in the CCRMA Classroom, Knoll 217, often also with a Zoom presence.<br />
<br />
Nette and Matt and this wiki page are organizing 2023 colloquia.<br />
<br />
<br />
= Winter Quarter (2023)=<br />
*'''1/11 (Week 1) - (TBD) Kanru Hua, CEO of Synthesizer V, Japan (https://dreamtonics.com/en/synthesizerv/)(Contact: MAMST Student Benny (Shicheng) Zhang<br />
*'''1/18 (Week 2) - Game and International Snacks Night (Nette & Kunwoo)<br />
*'''1/25 (Week 3) - Community-wide rapid-fire talks! ''' Share your recent thoughts, explorations, work, hobbies. Get to know what your peers are up to. Everyone is invited to share. <br />
**Speaker 1: Mike Mulshine<br />
**Speaker 2:<br />
**Speaker 3:<br />
**Speaker 4:<br />
**Speaker 5:<br />
**Speaker 6:<br />
**Speaker 7: <br />
**Speaker 8: <br />
**Speaker 9: <br />
**Speaker 10:<br />
**Speaker 11:<br />
**Speaker 12:<br />
<br />
*'''2/1 (Week 4) - TBD / Skills Share (Matt can organize)<br />
*'''2/8 (Week 5) - Laura Steenberge (Chris Chafe)<br />
*'''2/15 (Week 6) - Scott Oshiro [quantum music] / Mischa Dohler [Ericsson, low-latency XR] (Chris Chafe)<br />
*'''2/22 (Week 7) - Webchuck (Chris Chafe + lots of webchuck contributors)<br />
*'''3/1 (Week 8) - [tentative] [https://en.wikipedia.org/wiki/Annette_Vande_Gorne Annette Vande Gorne] (Constantin)<br />
*'''3/8 (Week 9) - [https://perbloland.com/ Per Bloland]<br />
*'''3/15 (Week 10) - [tentative] [https://www.jayafrisando.com Jay Afrisando] (Constantin)<br />
<br />
= Spring Quarter (2023)=<br />
*'''4/5 (Week 1) - [https://adamstanovic.com Adam Stanović] (Constantin)<br />
*'''4/12 (Week 2) - (tentative) Graham Wakefield ([https://artificialnature.net/#tab-artists Artificial Nature], [https://cycling74.com/books/go Generating Sound and Organizing Time]) (Matt)<br />
*'''4/19 (Week 3) - Matt Wright / TBD - how about having it in Bing while the GRAIL is set up?!?!?<br />
*'''4/26 (Week 4) - Aurie Hsu (Julia Mills) tentatively<br />
*'''5/3 (Week 5) - Conference-style talks. ''' This includes longer form presentations or lectures. Reach out to Mike (mulshine@stanford.edu) to sign up. <br />
*'''5/10 (Week 6) - TBD / Skills share (Matt can organize)<br />
*'''5/17 (Week 7) - TBD<br />
*'''5/24 (Week 8) - TBD<br />
*'''5/31 (Week 9) - TBD<br />
*'''6/7 (Week 10) - TBD<br />
<br /><br /><br /><br /><br />
<br />
= Past - Fall Quarter (2022)=<br />
<br />
*'''09/28 (Week 1) New Student Introductions<br />
**Speaker 1: Sneha Shah<br />
**Speaker 2: Josh Mitchell<br />
**Speaker 3: Emily Kuo<br />
**Speaker 4: Balazs & Truls<br />
**Speaker 5: Julia Yu<br />
**Speaker 6: Senyuan Fan<br />
**Speaker 7: Celeste Betancur<br />
**Speaker 8: Victoria Litton<br />
**Speaker 9: Yiheng Dong<br />
**Speaker 10 Eito Murakami<br />
**Speaker 11 Soohyun Kim<br />
**Speaker 12 Luna Valentin<br />
**Speaker 13 Terry Feng<br />
**Speaker 14 Alex Han<br />
**Speaker 15 Benny Zhang<br />
**Speaker 16 Sami Wurm<br />
**Speaker 17 Neha Rajagopalan<br />
<br />
*'''10/05 (Week 2) Faculty and Staff Rapid Fire<br />
**Speaker 1 Chris Chafe<br />
**Speaker 2 Ge Wang<br />
**Speaker 3 Patricia Alessandrini<br />
**Speaker 4 Marina Bosi<br />
**Speaker 5 Julius Smith<br />
**Speaker 6 Jarek Kapuściński<br />
**Speaker 7<br />
**Speaker 8<br />
**Speaker 9<br />
**Speaker 10 Mark Rau<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/12 (Week 3) WINGS and Peer Mentoring in Music ''' (Mara Mills visit postponed to Spring)<br />
<br />
*'''10/19 (Week 4) Faculty and Staff Rapid Fire, part 2<br />
**Speaker 1 Takako Fujioka<br />
**Speaker 2 Eleanor Selfridge-Field<br />
**Speaker 3 Craig Stuart Sapp<br />
**Speaker 4 Craig Stuart Sapp<br />
**Speaker 5 Poppy Crum<br />
**Speaker 6 Jonathan Berger<br />
**Speaker 7 Matt Wright<br />
**Speaker 8 Fernando Lopez-Lezcano<br />
**Speaker 9 Constantin Basica<br />
**Speaker 10 Stephanie Sherriff<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/26 (Week 5) Romain Michon, Tanguy Risset, Maxime Popoff : [https://ccrma.stanford.edu/events/high-level-programming-of-fpgas-audio-real-time-signal-processing-applications High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications]<br />
<br />
*'''11/2 (Week 6) - TBD<br />
<br />
*'''11/9 (Week 7) - Pizza & Pegagody: Assignments and Evaluations<br />
<br />
*'''11/16 (Week 8) - Student-only Town Hall<br />
<br />
*'''11/23 (Week 9) - NO SEMINAR (Thanksgiving break)<br />
<br />
*'''11/30 (Week 10) - planning session for Winter and Spring colloquium<br />
<br />
<br /><br />
<br />
= Spring Quarter (2022)=<br />
<br />
*'''03/30 - Spring Welcome Dinner at Treehouse'''<br />
*'''04/06 - In-House Project and Research Updates (Everyone is encouraged to present)'''<br />
*'''04/13 - BREAK'''<br />
*'''04/19* Tuesday - Installation by J. Mills'''<br />
*'''04/27 - [http://maramills.org/ Mara Mills] (Virtual Talk)'''<br />
*'''05/04 - Inclusive Teaching Workshop (Lloyd May & CTL)'''<br />
*'''05/11 - BREAK '''<br />
*'''05/18 - BREAK'''<br />
*'''05/25 - BREAK'''<br />
*'''06/01 - BREAK'''<br />
<br />
<br />
*''' Future Colloquiums already booked:'''<br />
*10/12 Mara Mills - History of PCM<br />
<br />
= Past - Winter Quarter (2022)=<br />
*'''01/05 - BREAK'''<br />
<br />
*'''01/12 - David Kanaga''' Composer/designer behind the games: [https://store.steampowered.com/app/223450/Dyad/ Dyad], [https://store.steampowered.com/app/219680/Proteus/ Proteus], [https://store.steampowered.com/app/284260/PANORAMICAL/ PANORAMICAL], & the podcast-opera [https://player.fm/series/soft-valkyrie Soft Valkyrie] ([https://stanford.zoom.us/rec/play/mieXwoFVZvaXBeyMXjVYfECaDWF1g-cPB6crtb7F4XDWLBZd9VQrm51DGPZ4MzmkigERYPVV_fEUcpGS._UjZ93byy84C4Z8b?continueMode=true&_x_zm_rtaid=5upEGBnLQ1y0Lo9FUvcfhQ.1642105886449.38f96f3577c62100fbf9b412b90ef670&_x_zm_rhtaid=270 Recording available here])<br />
<br />
*'''01/19 - [http://www.marcevanstein.com/ Marc Evanstein]'''<br />
<br />
*'''01/26 - Walker Davis & Alex Mitchell from [https://boomy.com/ boomy] '''<br />
<br />
*'''02/02 - Social Event and Student-Only Meeting '''<br />
<br />
*'''02/09 - [http://vibeke.info/ Vibeke Sorensen] '''<br />
<br />
*'''02/16 - Unofficial Social and Welcome to WasteLAnd!'''<br />
<br />
*'''02/23 - Break'''<br />
<br />
*'''03/02 - Rapid Fire and Conference Style Talks''' (Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Tamilore Awosile (10 mins)<br />
** Speaker 2: Frank Mondelli (Conference Style)<br />
** Speaker 3: Lloyd May (Rapid Fire)<br />
** Speaker 4: Julia Mills (Conference Style)<br />
** Speaker 5: Nima Farzaneh (Conference Style)<br />
<br />
*'''03/09 - CCRMA Town Hall'''<br />
<br />
*'''03/16 - Break'''<br />
<br />
= Past - Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Lloyd May (Rapid Fire)<br />
** Speaker 2: Chris Chafe (Rapid Fire via zoom, assuming my internet works after the storm)<br />
** Speaker 3: Mark Rau (Rapid Fire)<br />
** Speaker 4: Champ Darabundit (Conference)<br />
** Speaker 5: Eleanor Selfridge-Field (Rapid Fire)<br />
** Speaker 6: Crag Stuart Sapp (Rapid Fire)<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 Special Guest Talks:''' Nils Tonnätt & Victoria Shen<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: [[12pm]] - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town !! <br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In person colloquiua will not be held for the 2020 Autumn Quarter. All events will be held remotely.<br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul Demarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician, whose focus and passion has been to<br />
share the joy of music with others. In 2018, a successful outcome of<br />
her innovative music program designed for senior citizens was the<br />
turning point for her to decide to change the course of learning piano<br />
in a less complex route. Her engineering background helped her to<br />
start working on the idea that bridges the gap between music and<br />
technology.<br />
<br />
The approach to fingering in music has always been and still is one of<br />
the major elements of success for keyboard players. Correct fingering<br />
assists the performer in delivering a better technical and musical<br />
performance. This research presents the best technique to generate<br />
fingering for any sequence of music notes. Dynamic programming and<br />
mathematics are major parts of this paper, they work alongside rules<br />
set by pianists to calculate the most practical fingerings for any<br />
musical passage.<br />
<br />
The ultimate goal is to facilitate the process of playing the piano<br />
using an AR platform. This is helpful for scaling music instructors<br />
and allows for efficient teaching. Through solving this problem,<br />
virtual instructions would be more productive and impactful. Success<br />
of this research applied in the AR field can be applied to robotic<br />
tasks in educational programs, video games, and medical fields.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=24014Colloquium2023-01-07T00:15:54Z<p>Mulshine: </p>
<hr />
<div>'''Wednesday 5:30pm PT (CCRMA Classroom & Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happens every Wednesday during the academic year from 5:30 &#8211; 7:00pm and meets in the CCRMA Classroom, Knoll 217, often also with a Zoom presence.<br />
<br />
Nette and Matt and this wiki page are organizing 2023 colloquia.<br />
<br />
<br />
= Winter Quarter (2023)=<br />
*'''1/11 (Week 1) - (TBD) Kanru Hua, CEO of Synthesizer V, Japan (https://dreamtonics.com/en/synthesizerv/) (Contact: MAMST Student Benny (Shicheng) Zhang)<br />
*'''1/18 (Week 2) - Game and International Snacks Night (Nette & Kunwoo)<br />
*'''1/25 (Week 3) - Community-wide rapid-fire talks! ''' Share your recent thoughts, explorations, work, hobbies. Get to know what your peers are up to. Everyone is invited to share. <br />
**Speaker 1: Mike Mulshine<br />
**Speaker 2:<br />
**Speaker 3:<br />
**Speaker 4:<br />
**Speaker 5:<br />
**Speaker 6:<br />
**Speaker 7: <br />
**Speaker 8: <br />
**Speaker 9: <br />
**Speaker 10:<br />
**Speaker 11:<br />
**Speaker 12:<br />
<br />
*'''2/1 (Week 4) - TBD / Skills Share (Matt can organize)<br />
*'''2/8 (Week 5) - Laura Steenberge (Chris Chafe)<br />
*'''2/15 (Week 6) - Scott Oshiro [quantum music] / Mischa Dohler [Ericsson, low-latency XR] (Chris Chafe)<br />
*'''2/22 (Week 7) - WebChucK (Chris Chafe + lots of WebChucK contributors)<br />
*'''3/1 (Week 8) - [tentative] [https://en.wikipedia.org/wiki/Annette_Vande_Gorne Annette Vande Gorne] (Constantin)<br />
*'''3/8 (Week 9) - [https://perbloland.com/ Per Bloland]<br />
*'''3/15 (Week 10) - [tentative] [https://www.jayafrisando.com Jay Afrisando] (Constantin)<br />
<br />
= Spring Quarter (2023)=<br />
*'''4/5 (Week 1) - [https://adamstanovic.com Adam Stanović] (Constantin)<br />
*'''4/12 (Week 2) - (tentative) Graham Wakefield ([https://artificialnature.net/#tab-artists Artificial Nature], [https://cycling74.com/books/go Generating Sound and Organizing Time]) (Matt)<br />
*'''4/19 (Week 3) - Matt Wright / TBD - how about having it in Bing while the GRAIL is set up?!?!?<br />
*'''4/26 (Week 4) - Aurie Hsu (Julia Mills) tentatively<br />
*'''5/3 (Week 5) - [tentative] Conference-style talks. ''' This includes longer form presentations or lectures. Reach out to Mike (mulshine@stanford.edu) to sign up. <br />
*'''5/10 (Week 6) - TBD / Skills share (Matt can organize)<br />
*'''5/17 (Week 7) - TBD<br />
*'''5/24 (Week 8) - TBD<br />
*'''5/31 (Week 9) - TBD<br />
*'''6/7 (Week 10) - TBD<br />
<br /><br /><br /><br /><br />
<br />
<br />
= Past - Fall Quarter (2022)=<br />
<br />
*'''09/28 (Week 1) New Student Introductions<br />
**Speaker 1: Sneha Shah<br />
**Speaker 2: Josh Mitchell<br />
**Speaker 3: Emily Kuo<br />
**Speaker 4: Balazs & Truls<br />
**Speaker 5: Julia Yu<br />
**Speaker 6: Senyuan Fan<br />
**Speaker 7: Celeste Betancur<br />
**Speaker 8: Victoria Litton<br />
**Speaker 9: Yiheng Dong<br />
**Speaker 10 Eito Murakami<br />
**Speaker 11 Soohyun Kim<br />
**Speaker 12 Luna Valentin<br />
**Speaker 13 Terry Feng<br />
**Speaker 14 Alex Han<br />
**Speaker 15 Benny Zhang<br />
**Speaker 16 Sami Wurm<br />
**Speaker 17 Neha Rajagopalan<br />
<br />
*'''10/05 (Week 2) Faculty and Staff Rapid Fire<br />
**Speaker 1 Chris Chafe<br />
**Speaker 2 Ge Wang<br />
**Speaker 3 Patricia Alessandrini<br />
**Speaker 4 Marina Bosi<br />
**Speaker 5 Julius Smith<br />
**Speaker 6 Jarek Kapuściński<br />
**Speaker 7<br />
**Speaker 8<br />
**Speaker 9<br />
**Speaker 10 Mark Rau<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/12 (Week 3) WINGS and Peer Mentoring in Music ''' (Mara Mills visit postponed to Spring)<br />
<br />
*'''10/19 (Week 4) Faculty and Staff Rapid Fire, part 2<br />
**Speaker 1 Takako Fujioka<br />
**Speaker 2 Eleanor Selfridge-Field<br />
**Speaker 3 Craig Stuart Sapp<br />
**Speaker 4 Craig Stuart Sapp<br />
**Speaker 5 Poppy Crum<br />
**Speaker 6 Jonathan Berger<br />
**Speaker 7 Matt Wright<br />
**Speaker 8 Fernando Lopez-Lezcano<br />
**Speaker 9 Constantin Basica<br />
**Speaker 10 Stephanie Sherriff<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/26 (Week 5) Romain Michon, Tanguy Risset, Maxime Popoff : [https://ccrma.stanford.edu/events/high-level-programming-of-fpgas-audio-real-time-signal-processing-applications High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications]<br />
<br />
*'''11/2 (Week 6) - TBD<br />
<br />
*'''11/9 (Week 7) - Pizza & Pedagogy: Assignments and Evaluations<br />
<br />
*'''11/16 (Week 8) - Student-only Town Hall<br />
<br />
*'''11/23 (Week 9) - NO SEMINAR (Thanksgiving break)<br />
<br />
*'''11/30 (Week 10) - planning session for Winter and Spring colloquium<br />
<br />
<br /><br />
<br />
= Spring Quarter (2022)=<br />
<br />
*'''03/30 - Spring Welcome Dinner at Treehouse'''<br />
*'''04/06 - In-House Project and Research Updates (Everyone is encouraged to present)'''<br />
*'''04/13 - BREAK'''<br />
*'''04/19* Tuesday - Installation by J. Mills'''<br />
*'''04/27 - [http://maramills.org/ Mara Mills] (Virtual Talk)'''<br />
*'''05/04 - Inclusive Teaching Workshop (Lloyd May & CTL)'''<br />
*'''05/11 - BREAK '''<br />
*'''05/18 - BREAK'''<br />
*'''05/25 - BREAK'''<br />
*'''06/01 - BREAK'''<br />
<br />
<br />
*''' Future Colloquia already booked:'''<br />
*10/12 Mara Mills - History of PCM<br />
<br />
= Past - Winter Quarter (2022)=<br />
*'''01/05 - BREAK'''<br />
<br />
*'''01/12 - David Kanaga''' Composer/designer behind the games: [https://store.steampowered.com/app/223450/Dyad/ Dyad], [https://store.steampowered.com/app/219680/Proteus/ Proteus], [https://store.steampowered.com/app/284260/PANORAMICAL/ PANORAMICAL], & the podcast-opera [https://player.fm/series/soft-valkyrie Soft Valkyrie] ([https://stanford.zoom.us/rec/play/mieXwoFVZvaXBeyMXjVYfECaDWF1g-cPB6crtb7F4XDWLBZd9VQrm51DGPZ4MzmkigERYPVV_fEUcpGS._UjZ93byy84C4Z8b?continueMode=true&_x_zm_rtaid=5upEGBnLQ1y0Lo9FUvcfhQ.1642105886449.38f96f3577c62100fbf9b412b90ef670&_x_zm_rhtaid=270 Recording available here])<br />
<br />
*'''01/19 - [http://www.marcevanstein.com/ Marc Evanstein]'''<br />
<br />
*'''01/26 - Walker Davis & Alex Mitchell from [https://boomy.com/ boomy] '''<br />
<br />
*'''02/02 - Social Event and Student-Only Meeting '''<br />
<br />
*'''02/09 - [http://vibeke.info/ Vibeke Sorensen] '''<br />
<br />
*'''02/16 - Unofficial Social and Welcome to WasteLAnd!'''<br />
<br />
*'''02/23 - Break'''<br />
<br />
*'''03/02 - Rapid Fire and Conference Style Talks''' (Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Tamilore Awosile (10 mins)<br />
** Speaker 2: Frank Mondelli (Conference Style)<br />
** Speaker 3: Lloyd May (Rapid Fire)<br />
** Speaker 4: Julia Mills (Conference Style)<br />
** Speaker 5: Nima Farzaneh (Conference Style)<br />
<br />
*'''03/09 - CCRMA Town Hall'''<br />
<br />
*'''03/16 - Break'''<br />
<br />
= Past - Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Lloyd May (Rapid Fire)<br />
** Speaker 2: Chris Chafe (Rapid Fire via zoom, assuming my internet works after the storm)<br />
** Speaker 3: Mark Rau (Rapid Fire)<br />
** Speaker 4: Champ Darabundit (Conference)<br />
** Speaker 5: Eleanor Selfridge-Field (Rapid Fire)<br />
** Speaker 6: Craig Stuart Sapp (Rapid Fire)<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 Special Guest Talks:''' Nils Tonnätt & Victoria Shen<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: [[12pm]] - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town !! <br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.<br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul Demarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician whose focus and passion have been<br />
sharing the joy of music with others. In 2018, the success of her<br />
innovative music program for senior citizens prompted her to pursue a<br />
simpler approach to learning piano. Her engineering background helped<br />
her begin work on an idea that bridges the gap between music and<br />
technology.<br />
<br />
Fingering has always been, and remains, one of the major elements of<br />
success for keyboard players: correct fingering helps the performer<br />
deliver a better technical and musical performance. This research<br />
presents a technique for generating fingering for any sequence of<br />
notes. Dynamic programming and mathematics are central to the<br />
approach; working alongside rules set by pianists, they calculate the<br />
most practical fingerings for any musical passage.<br />
<br />
The ultimate goal is to facilitate learning the piano through an AR<br />
platform, helping music instruction scale and making teaching more<br />
efficient. Solving this problem would make virtual instruction more<br />
productive and impactful. Beyond AR, the results could be applied to<br />
robotic tasks in educational programs, video games, and medical<br />
settings.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=24005Colloquium2022-12-05T19:00:33Z<p>Mulshine: /* Spring Quarter (2023) */</p>
<hr />
<div>'''Wednesday 5:30pm PT (CCRMA Classroom & Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happens every Wednesday during the academic year from 5:30 &#8211; 7:00pm and meets in the CCRMA Classroom, Knoll 217, often also with a Zoom presence.<br />
<br />
Nette and Matt and this wiki page are organizing 2023 colloquia.<br />
<br />
= Fall Quarter (2022)=<br />
<br />
*'''09/28 (Week 1) New Student Introductions<br />
**Speaker 1: Sneha Shah<br />
**Speaker 2: Josh Mitchell<br />
**Speaker 3: Emily Kuo<br />
**Speaker 4: Balazs & Truls<br />
**Speaker 5: Julia Yu<br />
**Speaker 6: Senyuan Fan<br />
**Speaker 7: Celeste Betancur<br />
**Speaker 8: Victoria Litton<br />
**Speaker 9: Yiheng Dong<br />
**Speaker 10 Eito Murakami<br />
**Speaker 11 Soohyun Kim<br />
**Speaker 12 Luna Valentin<br />
**Speaker 13 Terry Feng<br />
**Speaker 14 Alex Han<br />
**Speaker 15 Benny Zhang<br />
**Speaker 16 Sami Wurm<br />
**Speaker 17 Neha Rajagopalan<br />
<br />
*'''10/05 (Week 2) Faculty and Staff Rapid Fire<br />
**Speaker 1 Chris Chafe<br />
**Speaker 2 Ge Wang<br />
**Speaker 3 Patricia Alessandrini<br />
**Speaker 4 Marina Bosi<br />
**Speaker 5 Julius Smith<br />
**Speaker 6 Jarek Kapuściński<br />
**Speaker 7<br />
**Speaker 8<br />
**Speaker 9<br />
**Speaker 10 Mark Rau<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/12 (Week 3) WINGS and Peer Mentoring in Music ''' (Mara Mills visit postponed to Spring)<br />
<br />
*'''10/19 (Week 4) Faculty and Staff Rapid Fire, part 2<br />
**Speaker 1 Takako Fujioka<br />
**Speaker 2 Eleanor Selfridge-Field<br />
**Speaker 3 Craig Stuart Sapp<br />
**Speaker 4 Craig Stuart Sapp<br />
**Speaker 5 Poppy Crum<br />
**Speaker 6 Jonathan Berger<br />
**Speaker 7 Matt Wright<br />
**Speaker 8 Fernando Lopez-Lezcano<br />
**Speaker 9 Constantin Basica<br />
**Speaker 10 Stephanie Sherriff<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/26 (Week 5) Romain Michon, Tanguy Risset, Maxime Popoff : [https://ccrma.stanford.edu/events/high-level-programming-of-fpgas-audio-real-time-signal-processing-applications High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications]<br />
<br />
*'''11/2 (Week 6) - TBD<br />
<br />
*'''11/9 (Week 7) - Pizza & Pedagogy: Assignments and Evaluations<br />
<br />
*'''11/16 (Week 8) - Student-only Town Hall<br />
<br />
*'''11/23 (Week 9) - NO SEMINAR (Thanksgiving break)<br />
<br />
*'''11/30 (Week 10) - planning session for Winter and Spring colloquium<br />
<br />
<br /><br />
<br />
= Winter Quarter (2023)=<br />
*'''1/11 (Week 1) - (TBD) Kanru Hua, CEO of Synthesizer V, Japan (https://dreamtonics.com/en/synthesizerv/) (Contact: MAMST Student Benny (Shicheng) Zhang)<br />
*'''1/18 (Week 2) - Game and International Snacks Night (Nette & Kunwoo)<br />
*'''1/25 (Week 3) - [tentative] Community-wide rapid-fire talks! ''' Share your recent thoughts, explorations, work, hobbies. Get to know what your peers are up to. Everyone is invited to share. <br />
*'''2/1 (Week 4) - TBD<br />
*'''2/8 (Week 5) - Laura Steenberge (Chris Chafe)<br />
*'''2/15 (Week 6) - Scott Oshiro<br />
*'''2/22 (Week 7) - WebChucK (Chris Chafe + lots of WebChucK contributors)<br />
*'''3/1 (Week 8) - [tentative] [https://en.wikipedia.org/wiki/Annette_Vande_Gorne Annette Vande Gorne] (Constantin)<br />
*'''3/8 (Week 9) - TBD<br />
*'''3/15 (Week 10) - [tentative] [https://www.jayafrisando.com Jay Afrisando] (Constantin)<br />
<br />
= Spring Quarter (2023)=<br />
*'''4/5 (Week 1) - [https://adamstanovic.com Adam Stanović] (Constantin)<br />
*'''4/12 (Week 2) - TBD<br />
*'''4/19 (Week 3) - Matt Wright / TBD - how about having it in Bing while the GRAIL is set up?!?!?<br />
*'''4/26 (Week 4) - Aurie Hsu (Julia Mills) tentatively<br />
*'''5/3 (Week 5) - [tentative] Conference-style talks. ''' This includes longer form presentations or lectures. Reach out to Mike (mulshine@stanford.edu) to sign up. <br />
*'''5/10 (Week 6) - <br />
*'''5/17 (Week 7) - TBD<br />
*'''5/24 (Week 8) - TBD<br />
*'''5/31 (Week 9) - TBD<br />
*'''6/7 (Week 10) - TBD<br />
<br /><br /><br /><br /><br />
<br />
= Past - Spring Quarter (2022)=<br />
<br />
*'''03/30 - Spring Welcome Dinner at Treehouse'''<br />
*'''04/06 - In-House Project and Research Updates (Everyone is encouraged to present)'''<br />
*'''04/13 - BREAK'''<br />
*'''04/19* Tuesday - Installation by J. Mills'''<br />
*'''04/27 - [http://maramills.org/ Mara Mills] (Virtual Talk)'''<br />
*'''05/04 - Inclusive Teaching Workshop (Lloyd May & CTL)'''<br />
*'''05/11 - BREAK '''<br />
*'''05/18 - BREAK'''<br />
*'''05/25 - BREAK'''<br />
*'''06/01 - BREAK'''<br />
<br />
<br />
*''' Future Colloquia already booked:'''<br />
*10/12 Mara Mills - History of PCM<br />
<br />
= Past - Winter Quarter (2022)=<br />
*'''01/05 - BREAK'''<br />
<br />
*'''01/12 - David Kanaga''' Composer/designer behind the games: [https://store.steampowered.com/app/223450/Dyad/ Dyad], [https://store.steampowered.com/app/219680/Proteus/ Proteus], [https://store.steampowered.com/app/284260/PANORAMICAL/ PANORAMICAL], & the podcast-opera [https://player.fm/series/soft-valkyrie Soft Valkyrie] ([https://stanford.zoom.us/rec/play/mieXwoFVZvaXBeyMXjVYfECaDWF1g-cPB6crtb7F4XDWLBZd9VQrm51DGPZ4MzmkigERYPVV_fEUcpGS._UjZ93byy84C4Z8b?continueMode=true&_x_zm_rtaid=5upEGBnLQ1y0Lo9FUvcfhQ.1642105886449.38f96f3577c62100fbf9b412b90ef670&_x_zm_rhtaid=270 Recording available here])<br />
<br />
*'''01/19 - [http://www.marcevanstein.com/ Marc Evanstein]'''<br />
<br />
*'''01/26 - Walker Davis & Alex Mitchell from [https://boomy.com/ boomy] '''<br />
<br />
*'''02/02 - Social Event and Student-Only Meeting '''<br />
<br />
*'''02/09 - [http://vibeke.info/ Vibeke Sorensen] '''<br />
<br />
*'''02/16 - Unofficial Social and Welcome to WasteLAnd!'''<br />
<br />
*'''02/23 - Break'''<br />
<br />
*'''03/02 - Rapid Fire and Conference Style Talks''' (Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Tamilore Awosile (10 mins)<br />
** Speaker 2: Frank Mondelli (Conference Style)<br />
** Speaker 3: Lloyd May (Rapid Fire)<br />
** Speaker 4: Julia Mills (Conference Style)<br />
** Speaker 5: Nima Farzaneh (Conference Style)<br />
<br />
*'''03/09 - CCRMA Town Hall'''<br />
<br />
*'''03/16 - Break'''<br />
<br />
= Past - Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Lloyd May (Rapid Fire)<br />
** Speaker 2: Chris Chafe (Rapid Fire via zoom, assuming my internet works after the storm)<br />
** Speaker 3: Mark Rau (Rapid Fire)<br />
** Speaker 4: Champ Darabundit (Conference)<br />
** Speaker 5: Eleanor Selfridge-Field (Rapid Fire)<br />
** Speaker 6: Craig Stuart Sapp (Rapid Fire)<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 Special Guest Talks:''' Nils Tonnätt & Victoria Shen<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: [[12pm]] - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town !! <br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.<br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul Demarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician whose focus and passion have been<br />
sharing the joy of music with others. In 2018, the success of her<br />
innovative music program for senior citizens prompted her to pursue a<br />
simpler approach to learning piano. Her engineering background helped<br />
her begin work on an idea that bridges the gap between music and<br />
technology.<br />
<br />
Fingering has always been, and remains, one of the major elements of<br />
success for keyboard players: correct fingering helps the performer<br />
deliver a better technical and musical performance. This research<br />
presents a technique for generating fingering for any sequence of<br />
notes. Dynamic programming and mathematics are central to the<br />
approach; working alongside rules set by pianists, they calculate the<br />
most practical fingerings for any musical passage.<br />
<br />
The ultimate goal is to facilitate learning the piano through an AR<br />
platform, helping music instruction scale and making teaching more<br />
efficient. Solving this problem would make virtual instruction more<br />
productive and impactful. Beyond AR, the results could be applied to<br />
robotic tasks in educational programs, video games, and medical<br />
settings.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=24004Colloquium2022-12-05T19:00:25Z<p>Mulshine: /* Winter Quarter (2023) */</p>
<hr />
<div>'''Wednesday 5:30pm PT (CCRMA Classroom & Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happens every Wednesday during the academic year from 5:30 &#8211; 7:00pm and meets in the CCRMA Classroom, Knoll 217, often also with a Zoom presence.<br />
<br />
Nette and Matt and this wiki page are organizing 2023 colloquia.<br />
<br />
= Fall Quarter (2022)=<br />
<br />
*'''09/28 (Week 1) New Student Introductions<br />
**Speaker 1: Sneha Shah<br />
**Speaker 2: Josh Mitchell<br />
**Speaker 3: Emily Kuo<br />
**Speaker 4: Balazs & Truls<br />
**Speaker 5: Julia Yu<br />
**Speaker 6: Senyuan Fan<br />
**Speaker 7: Celeste Betancur<br />
**Speaker 8: Victoria Litton<br />
**Speaker 9: Yiheng Dong<br />
**Speaker 10 Eito Murakami<br />
**Speaker 11 Soohyun Kim<br />
**Speaker 12 Luna Valentin<br />
**Speaker 13 Terry Feng<br />
**Speaker 14 Alex Han<br />
**Speaker 15 Benny Zhang<br />
**Speaker 16 Sami Wurm<br />
**Speaker 17 Neha Rajagopalan<br />
<br />
*'''10/05 (Week 2) Faculty and Staff Rapid Fire<br />
**Speaker 1 Chris Chafe<br />
**Speaker 2 Ge Wang<br />
**Speaker 3 Patricia Alessandrini<br />
**Speaker 4 Marina Bosi<br />
**Speaker 5 Julius Smith<br />
**Speaker 6 Jarek Kapuściński<br />
**Speaker 7<br />
**Speaker 8<br />
**Speaker 9<br />
**Speaker 10 Mark Rau<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/12 (Week 3) WINGS and Peer Mentoring in Music ''' (Mara Mills visit postponed to Spring)<br />
<br />
*'''10/19 (Week 4) Faculty and Staff Rapid Fire, part 2<br />
**Speaker 1 Takako Fujioka<br />
**Speaker 2 Eleanor Selfridge-Field<br />
**Speaker 3 Craig Stuart Sapp<br />
**Speaker 4 Craig Stuart Sapp<br />
**Speaker 5 Poppy Crum<br />
**Speaker 6 Jonathan Berger<br />
**Speaker 7 Matt Wright<br />
**Speaker 8 Fernando Lopez-Lezcano<br />
**Speaker 9 Constantin Basica<br />
**Speaker 10 Stephanie Sherriff<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/26 (Week 5) Romain Michon, Tanguy Risset, Maxime Popoff : [https://ccrma.stanford.edu/events/high-level-programming-of-fpgas-audio-real-time-signal-processing-applications High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications]<br />
<br />
*'''11/2 (Week 6) - TBD<br />
<br />
*'''11/9 (Week 7) - Pizza & Pedagogy: Assignments and Evaluations<br />
<br />
*'''11/16 (Week 8) - Student-only Town Hall<br />
<br />
*'''11/23 (Week 9) - NO SEMINAR (Thanksgiving break)<br />
<br />
*'''11/30 (Week 10) - planning session for Winter and Spring colloquium<br />
<br />
<br /><br />
<br />
= Winter Quarter (2023)=<br />
*'''1/11 (Week 1) - (TBD) Kanru Hua, CEO of Synthesizer V, Japan (https://dreamtonics.com/en/synthesizerv/) (Contact: MAMST Student Benny (Shicheng) Zhang)<br />
*'''1/18 (Week 2) - Game and International Snacks Night (Nette & Kunwoo)<br />
*'''1/25 (Week 3) - [tentative] Community-wide rapid-fire talks! ''' Share your recent thoughts, explorations, work, hobbies. Get to know what your peers are up to. Everyone is invited to share. <br />
*'''2/1 (Week 4) - TBD<br />
*'''2/8 (Week 5) - Laura Steenberge (Chris Chafe)<br />
*'''2/15 (Week 6) - Scott Oshiro<br />
*'''2/22 (Week 7) - Webchuck (Chris Chafe + lots of webchuck contributors)<br />
*'''3/1 (Week 8) - [tentative] [https://en.wikipedia.org/wiki/Annette_Vande_Gorne Annette Vande Gorne] (Constantin)<br />
*'''3/8 (Week 9) - TBD<br />
*'''3/15 (Week 10) - [tentative] [https://www.jayafrisando.com Jay Afrisando] (Constantin)<br />
<br />
= Spring Quarter (2023)=<br />
*'''4/5 (Week 1) - [https://adamstanovic.com Adam Stanović] (Constantin)<br />
*'''4/12 (Week 2) - TBD<br />
*'''4/19 (Week 3) - Matt Wright / TBD - how about having it in Bing while the GRAIL is set up?!?!?<br />
*'''4/26 (Week 4) - Aurie Hsu (Julia Mills) tentatively<br />
*'''5/3 (Week 5) - [flexible] Conference-style talks. ''' This includes longer form presentations or lectures. Reach out to Mike (mulshine@stanford.edu) to sign up. <br />
*'''5/10 (Week 6) - <br />
*'''5/17 (Week 7) - TBD<br />
*'''5/24 (Week 8) - TBD<br />
*'''5/31 (Week 9) - TBD<br />
*'''6/7 (Week 10) - TBD<br />
<br /><br /><br /><br /><br />
<br />
= Past - Spring Quarter (2022)=<br />
<br />
*'''03/30 - Spring Welcome Dinner at Treehouse'''<br />
*'''04/06 - In-House Project and Research Updates (Everyone is encouraged to present)'''<br />
*'''04/13 - BREAK'''<br />
*'''04/19* Tuesday - Installation by J. Mills'''<br />
*'''04/27 - [http://maramills.org/ Mara Mills] (Virtual Talk)'''<br />
*'''05/04 - Inclusive Teaching Workshop (Lloyd May & CTL)'''<br />
*'''05/11 - BREAK '''<br />
*'''05/18 - BREAK'''<br />
*'''05/25 - BREAK'''<br />
*'''06/01 - BREAK'''<br />
<br />
<br />
*''' Future Colloquiums already booked:'''<br />
*10/12 Mara Mills - History of PCM<br />
<br />
= Past - Winter Quarter (2022)=<br />
*'''01/05 - BREAK'''<br />
<br />
*'''01/12 - David Kanaga''' Composer/designer behind the games: [https://store.steampowered.com/app/223450/Dyad/ Dyad], [https://store.steampowered.com/app/219680/Proteus/ Proteus], [https://store.steampowered.com/app/284260/PANORAMICAL/ PANORAMICAL], & the podcast-opera [https://player.fm/series/soft-valkyrie Soft Valkyrie] ([https://stanford.zoom.us/rec/play/mieXwoFVZvaXBeyMXjVYfECaDWF1g-cPB6crtb7F4XDWLBZd9VQrm51DGPZ4MzmkigERYPVV_fEUcpGS._UjZ93byy84C4Z8b?continueMode=true&_x_zm_rtaid=5upEGBnLQ1y0Lo9FUvcfhQ.1642105886449.38f96f3577c62100fbf9b412b90ef670&_x_zm_rhtaid=270 Recording available here])<br />
<br />
*'''01/19 - [http://www.marcevanstein.com/ Marc Evanstein]'''<br />
<br />
*'''01/26 - Walker Davis & Alex Mitchell from [https://boomy.com/ boomy] '''<br />
<br />
*'''02/02 - Social Event and Student-Only Meeting '''<br />
<br />
*'''02/09 - [http://vibeke.info/ Vibeke Sorensen] '''<br />
<br />
*'''02/16 - Unofficial Social and Welcome to WasteLAnd!'''<br />
<br />
*'''02/23 - Break'''<br />
<br />
*'''03/02 - Rapid Fire and Conference Style Talks''' (Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Tamilore Awosile (10 mins)<br />
** Speaker 2: Frank Mondelli (Conference Style)<br />
** Speaker 3: Lloyd May (Rapid Fire)<br />
** Speaker 4: Julia Mills (Conference Style)<br />
** Speaker 5: Nima Farzaneh (Conference Style)<br />
<br />
*'''03/09 - CCRMA Town Hall'''<br />
<br />
*'''03/16 - Break'''<br />
<br />
= Past - Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Lloyd May (Rapid Fire)<br />
** Speaker 2: Chris Chafe (Rapid Fire via zoom, assuming my internet works after the storm)<br />
** Speaker 3: Mark Rau (Rapid Fire)<br />
** Speaker 4: Champ Darabundit (Conference)<br />
** Speaker 5: Eleanor Selfridge-Field (Rapid Fire)<br />
** Speaker 6: Craig Stuart Sapp (Rapid Fire)<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 Special Guest Talks:''' Nils Tonnätt & Victoria Shen<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: [[12pm]] - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town Hall!! <br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.'''</span><br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul Demarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician whose focus and passion has been to<br />
share the joy of music with others. In 2018, the success of an<br />
innovative music program she designed for senior citizens prompted her<br />
to pursue a simpler route to learning the piano. Her engineering<br />
background helped her start working on an idea that bridges the gap<br />
between music and technology.<br />
<br />
Fingering has always been one of the major elements of success for<br />
keyboard players: correct fingering helps the performer deliver a<br />
better technical and musical performance. This research presents a<br />
technique for generating fingerings for any sequence of notes. Dynamic<br />
programming and mathematics are central to the approach, working<br />
alongside rules set by pianists to calculate the most practical<br />
fingerings for any musical passage.<br />
<br />
The ultimate goal is to facilitate learning the piano through an AR<br />
platform, which would help music instruction scale and make virtual<br />
teaching more productive and impactful. Techniques proven in the AR<br />
setting could also carry over to robotic tasks, educational programs,<br />
video games, and medical applications.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=24002Colloquium2022-12-05T18:59:49Z<p>Mulshine: /* Spring Quarter (2023) */</p>
<hr />
<div>'''Wednesday 5:30pm PT (CCRMA Classroom & Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happens every Wednesday during the academic year from 5:30 &#8211; 7:00pm and meets in the CCRMA Classroom, Knoll 217, often also with a Zoom presence.<br />
<br />
Nette and Matt and this wiki page are organizing 2023 colloquia.<br />
<br />
= Fall Quarter (2022)=<br />
<br />
*'''09/28 (Week 1) New Student Introductions<br />
**Speaker 1: Sneha Shah<br />
**Speaker 2: Josh Mitchell<br />
**Speaker 3: Emily Kuo<br />
**Speaker 4: Balazs & Truls<br />
**Speaker 5: Julia Yu<br />
**Speaker 6: Senyuan Fan<br />
**Speaker 7: Celeste Betancur<br />
**Speaker 8: Victoria Litton<br />
**Speaker 9: Yiheng Dong<br />
**Speaker 10 Eito Murakami<br />
**Speaker 11 Soohyun Kim<br />
**Speaker 12 Luna Valentin<br />
**Speaker 13 Terry Feng<br />
**Speaker 14 Alex Han<br />
**Speaker 15 Benny Zhang<br />
**Speaker 16 Sami Wurm<br />
**Speaker 17 Neha Rajagopalan<br />
<br />
*'''10/05 (Week 2) Faculty and Staff Rapid Fire<br />
**Speaker 1 Chris Chafe<br />
**Speaker 2 Ge Wang<br />
**Speaker 3 Patricia Alessandrini<br />
**Speaker 4 Marina Bosi<br />
**Speaker 5 Julius Smith<br />
**Speaker 6 Jarek Kapuściński<br />
**Speaker 7<br />
**Speaker 8<br />
**Speaker 9<br />
**Speaker 10 Mark Rau<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/12 (Week 3) WINGS and Peer Mentoring in Music ''' (Mara Mills visit postponed to Spring)<br />
<br />
*'''10/19 (Week 4) Faculty and Staff Rapid Fire, part 2<br />
**Speaker 1 Takako Fujioka<br />
**Speaker 2 Eleanor Selfridge-Field<br />
**Speaker 3 Craig Stuart Sapp<br />
**Speaker 4<br />
**Speaker 5 Poppy Crum<br />
**Speaker 6 Jonathan Berger<br />
**Speaker 7 Matt Wright<br />
**Speaker 8 Fernando Lopez-Lezcano<br />
**Speaker 9 Constantin Basica<br />
**Speaker 10 Stephanie Sherriff<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/26 (Week 5) Romain Michon, Tanguy Risset, Maxime Popoff : [https://ccrma.stanford.edu/events/high-level-programming-of-fpgas-audio-real-time-signal-processing-applications High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications]<br />
<br />
*'''11/2 (Week 6) - TBD<br />
<br />
*'''11/9 (Week 7) - Pizza & Pedagogy: Assignments and Evaluations<br />
<br />
*'''11/16 (Week 8) - Student-only Town Hall<br />
<br />
*'''11/23 (Week 9) - NO SEMINAR (Thanksgiving break)<br />
<br />
*'''11/30 (Week 10) - planning session for Winter and Spring colloquium<br />
<br />
<br /><br />
<br />
= Winter Quarter (2023)=<br />
*'''1/11 (Week 1) - (TBD) Kanru Hua, CEO of Synthesizer V, Japan (https://dreamtonics.com/en/synthesizerv/) (Contact: MAMST Student Benny (Shicheng) Zhang)<br />
*'''1/18 (Week 2) - Game and International Snacks Night (Nette & Kunwoo)<br />
*'''1/25 (Week 3) - Community-wide rapid-fire talks! ''' Share your recent thoughts, explorations, work, hobbies. Get to know what your peers are up to. Everyone is invited to share. <br />
*'''2/1 (Week 4) - TBD<br />
*'''2/8 (Week 5) - Laura Steenberge (Chris Chafe)<br />
*'''2/15 (Week 6) - Scott Oshiro<br />
*'''2/22 (Week 7) - Webchuck (Chris Chafe + lots of webchuck contributors)<br />
*'''3/1 (Week 8) - [tentative] [https://en.wikipedia.org/wiki/Annette_Vande_Gorne Annette Vande Gorne] (Constantin)<br />
*'''3/8 (Week 9) - TBD<br />
*'''3/15 (Week 10) - [tentative] [https://www.jayafrisando.com Jay Afrisando] (Constantin)<br />
<br />
= Spring Quarter (2023)=<br />
*'''4/5 (Week 1) - [https://adamstanovic.com Adam Stanović] (Constantin)<br />
*'''4/12 (Week 2) - TBD<br />
*'''4/19 (Week 3) - Matt Wright / TBD - how about having it in Bing while the GRAIL is set up?!?!?<br />
*'''4/26 (Week 4) - Aurie Hsu (Julia Mills) tentatively<br />
*'''5/3 (Week 5) - Conference-style talks. ''' This includes longer form presentations or lectures. Reach out to Mike (mulshine@stanford.edu) to sign up. <br />
*'''5/10 (Week 6) - <br />
*'''5/17 (Week 7) - TBD<br />
*'''5/24 (Week 8) - TBD<br />
*'''5/31 (Week 9) - TBD<br />
*'''6/7 (Week 10) - TBD<br />
<br /><br /><br /><br /><br />
<br />
= Past - Spring Quarter (2022)=<br />
<br />
*'''03/30 - Spring Welcome Dinner at Treehouse'''<br />
*'''04/06 - In-House Project and Research Updates (Everyone is encouraged to present)'''<br />
*'''04/13 - BREAK'''<br />
*'''04/19* Tuesday - Installation by J. Mills'''<br />
*'''04/27 - [http://maramills.org/ Mara Mills] (Virtual Talk)'''<br />
*'''05/04 - Inclusive Teaching Workshop (Lloyd May & CTL)'''<br />
*'''05/11 - BREAK '''<br />
*'''05/18 - BREAK'''<br />
*'''05/25 - BREAK'''<br />
*'''06/01 - BREAK'''<br />
<br />
<br />
*''' Future Colloquia already booked:'''<br />
*10/12 Mara Mills - History of PCM<br />
<br />
= Past - Winter Quarter (2022)=<br />
*'''01/05 - BREAK'''<br />
<br />
*'''01/12 - David Kanaga''' Composer/designer behind the games: [https://store.steampowered.com/app/223450/Dyad/ Dyad], [https://store.steampowered.com/app/219680/Proteus/ Proteus], [https://store.steampowered.com/app/284260/PANORAMICAL/ PANORAMICAL], & the podcast-opera [https://player.fm/series/soft-valkyrie Soft Valkyrie] ([https://stanford.zoom.us/rec/play/mieXwoFVZvaXBeyMXjVYfECaDWF1g-cPB6crtb7F4XDWLBZd9VQrm51DGPZ4MzmkigERYPVV_fEUcpGS._UjZ93byy84C4Z8b?continueMode=true&_x_zm_rtaid=5upEGBnLQ1y0Lo9FUvcfhQ.1642105886449.38f96f3577c62100fbf9b412b90ef670&_x_zm_rhtaid=270 Recording available here])<br />
<br />
*'''01/19 - [http://www.marcevanstein.com/ Marc Evanstein]'''<br />
<br />
*'''01/26 - Walker Davis & Alex Mitchell from [https://boomy.com/ boomy] '''<br />
<br />
*'''02/02 - Social Event and Student-Only Meeting '''<br />
<br />
*'''02/09 - [http://vibeke.info/ Vibeke Sorensen] '''<br />
<br />
*'''02/16 - Unofficial Social and Welcome to WasteLAnd!'''<br />
<br />
*'''02/23 - Break'''<br />
<br />
*'''03/02 - Rapid Fire and Conference Style Talks''' (Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Tamilore Awosile (10 mins)<br />
** Speaker 2: Frank Mondelli (Conference Style)<br />
** Speaker 3: Lloyd May (Rapid Fire)<br />
** Speaker 4: Julia Mills (Conference Style)<br />
** Speaker 5: Nima Farzaneh (Conference Style)<br />
<br />
*'''03/09 - CCRMA Town Hall'''<br />
<br />
*'''03/16 - Break'''<br />
<br />
= Past - Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Lloyd May (Rapid Fire)<br />
** Speaker 2: Chris Chafe (Rapid Fire via zoom, assuming my internet works after the storm)<br />
** Speaker 3: Mark Rau (Rapid Fire)<br />
** Speaker 4: Champ Darabundit (Conference)<br />
** Speaker 5: Eleanor Selfridge-Field (Rapid Fire)<br />
** Speaker 6: Craig Stuart Sapp (Rapid Fire)<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 Special Guest Talks:''' Nils Tonnätt & Victoria Shen<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: 12pm - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town Hall!! <br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.'''</span><br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul DeMarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician whose focus and passion have been to<br />
share the joy of music with others. In 2018, the success of her<br />
innovative music program for senior citizens prompted her to pursue a<br />
simpler path to learning piano. Her engineering background helped her<br />
start working on an idea that bridges the gap between music and<br />
technology.<br />
<br />
The approach to fingering in music has always been and still is one of<br />
the major elements of success for keyboard players. Correct fingering<br />
assists the performer in delivering a better technical and musical<br />
performance. This research presents a technique for generating<br />
fingerings for any sequence of music notes. Dynamic programming and<br />
mathematics are central to this paper; they work alongside rules<br />
set by pianists to calculate the most practical fingerings for any<br />
musical passage.<br />
<br />
The ultimate goal is to facilitate the process of playing the piano<br />
using an AR platform. This helps scale music instruction and allows<br />
for efficient teaching. By solving this problem, virtual instruction<br />
becomes more productive and impactful. Success of this research in<br />
the AR field can also be applied to robotic tasks in educational<br />
programs, video games, and medical fields.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=24001Colloquium2022-12-05T18:59:17Z<p>Mulshine: /* Winter Quarter (2023) */</p>
<hr />
<div>'''Wednesday 5:30pm PT (CCRMA Classroom & Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happens every Wednesday during the academic year from 5:30 &#8211; 7:00pm and meets in the CCRMA Classroom, Knoll 217, often also with a Zoom presence.<br />
<br />
Nette, Matt, and this wiki page are organizing the 2023 colloquia.<br />
<br />
= Fall Quarter (2022)=<br />
<br />
*'''09/28 (Week 1) New Student Introductions<br />
**Speaker 1: Sneha Shah<br />
**Speaker 2: Josh Mitchell<br />
**Speaker 3: Emily Kuo<br />
**Speaker 4: Balazs & Truls<br />
**Speaker 5: Julia Yu<br />
**Speaker 6: Senyuan Fan<br />
**Speaker 7: Celeste Betancur<br />
**Speaker 8: Victoria Litton<br />
**Speaker 9: Yiheng Dong<br />
**Speaker 10 Eito Murakami<br />
**Speaker 11 Soohyun Kim<br />
**Speaker 12 Luna Valentin<br />
**Speaker 13 Terry Feng<br />
**Speaker 14 Alex Han<br />
**Speaker 15 Benny Zhang<br />
**Speaker 16 Sami Wurm<br />
**Speaker 17 Neha Rajagopalan<br />
<br />
*'''10/05 (Week 2) Faculty and Staff Rapid Fire<br />
**Speaker 1 Chris Chafe<br />
**Speaker 2 Ge Wang<br />
**Speaker 3 Patricia Alessandrini<br />
**Speaker 4 Marina Bosi<br />
**Speaker 5 Julius Smith<br />
**Speaker 6 Jarek Kapuściński<br />
**Speaker 7<br />
**Speaker 8<br />
**Speaker 9<br />
**Speaker 10 Mark Rau<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/12 (Week 3) WINGS and Peer Mentoring in Music ''' (Mara Mills visit postponed to Spring)<br />
<br />
*'''10/19 (Week 4) Faculty and Staff Rapid Fire, part 2<br />
**Speaker 1 Takako Fujioka<br />
**Speaker 2 Eleanor Selfridge-Field<br />
**Speaker 3 Craig Stuart Sapp<br />
**Speaker 4<br />
**Speaker 5 Poppy Crum<br />
**Speaker 6 Jonathan Berger<br />
**Speaker 7 Matt Wright<br />
**Speaker 8 Fernando Lopez-Lezcano<br />
**Speaker 9 Constantin Basica<br />
**Speaker 10 Stephanie Sherriff<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/26 (Week 5) Romain Michon, Tanguy Risset, Maxime Popoff : [https://ccrma.stanford.edu/events/high-level-programming-of-fpgas-audio-real-time-signal-processing-applications High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications]<br />
<br />
*'''11/2 (Week 6) - TBD<br />
<br />
*'''11/9 (Week 7) - Pizza & Pedagogy: Assignments and Evaluations<br />
<br />
*'''11/16 (Week 8) - Student-only Town Hall<br />
<br />
*'''11/23 (Week 9) - NO SEMINAR (Thanksgiving break)<br />
<br />
*'''11/30 (Week 10) - Planning session for Winter and Spring colloquia<br />
<br />
<br /><br />
<br />
= Winter Quarter (2023)=<br />
*'''1/11 (Week 1) - (TBD) Kanru Hua, CEO of Synthesizer V, Japan (https://dreamtonics.com/en/synthesizerv/) (Contact: MAMST student Benny (Shicheng) Zhang)<br />
*'''1/18 (Week 2) - Game and International Snacks Night (Nette & Kunwoo)<br />
*'''1/25 (Week 3) - Community-wide rapid-fire talks! ''' Share your recent thoughts, explorations, work, hobbies. Get to know what your peers are up to. Everyone is invited to share. <br />
*'''2/1 (Week 4) - TBD<br />
*'''2/8 (Week 5) - Laura Steenberge (Chris Chafe)<br />
*'''2/15 (Week 6) - Scott Oshiro<br />
*'''2/22 (Week 7) - WebChucK (Chris Chafe + many WebChucK contributors)<br />
*'''3/1 (Week 8) - [tentative] [https://en.wikipedia.org/wiki/Annette_Vande_Gorne Annette Vande Gorne] (Constantin)<br />
*'''3/8 (Week 9) - TBD<br />
*'''3/15 (Week 10) - [tentative] [https://www.jayafrisando.com Jay Afrisando] (Constantin)<br />
<br />
= Spring Quarter (2023)=<br />
*'''4/5 (Week 1) - [https://adamstanovic.com Adam Stanović] (Constantin)<br />
*'''4/12 (Week 2) - TBD<br />
*'''4/19 (Week 3) - Matt Wright / TBD - how about having it in Bing while the GRAIL is set up?!?!?<br />
*'''4/26 (Week 4) - [tentative] Aurie Hsu (Julia Mills)<br />
*'''5/3 (Week 5) - Conference-style (long-form) talks and presentations. Reach out to Mike (mulshine@stanford.edu) to sign up. <br />
*'''5/10 (Week 6) - <br />
*'''5/17 (Week 7) - TBD<br />
*'''5/24 (Week 8) - TBD<br />
*'''5/31 (Week 9) - TBD<br />
*'''6/7 (Week 10) - TBD<br />
<br /><br /><br /><br /><br />
<br />
= Past - Spring Quarter (2022)=<br />
<br />
*'''03/30 - Spring Welcome Dinner at Treehouse'''<br />
*'''04/06 - In-House Project and Research Updates (Everyone is encouraged to present)'''<br />
*'''04/13 - BREAK'''<br />
*'''04/19* Tuesday - Installation by J. Mills'''<br />
*'''04/27 - [http://maramills.org/ Mara Mills] (Virtual Talk)'''<br />
*'''05/04 - Inclusive Teaching Workshop (Lloyd May & CTL)'''<br />
*'''05/11 - BREAK '''<br />
*'''05/18 - BREAK'''<br />
*'''05/25 - BREAK'''<br />
*'''06/01 - BREAK'''<br />
<br />
<br />
*''' Future Colloquia already booked:'''<br />
*10/12 Mara Mills - History of PCM<br />
<br />
= Past - Winter Quarter (2022)=<br />
*'''01/05 - BREAK'''<br />
<br />
*'''01/12 - David Kanaga''' Composer/designer behind the games: [https://store.steampowered.com/app/223450/Dyad/ Dyad], [https://store.steampowered.com/app/219680/Proteus/ Proteus], [https://store.steampowered.com/app/284260/PANORAMICAL/ PANORAMICAL], & the podcast-opera [https://player.fm/series/soft-valkyrie Soft Valkyrie] ([https://stanford.zoom.us/rec/play/mieXwoFVZvaXBeyMXjVYfECaDWF1g-cPB6crtb7F4XDWLBZd9VQrm51DGPZ4MzmkigERYPVV_fEUcpGS._UjZ93byy84C4Z8b?continueMode=true&_x_zm_rtaid=5upEGBnLQ1y0Lo9FUvcfhQ.1642105886449.38f96f3577c62100fbf9b412b90ef670&_x_zm_rhtaid=270 Recording available here])<br />
<br />
*'''01/19 - [http://www.marcevanstein.com/ Marc Evanstein]'''<br />
<br />
*'''01/26 - Walker Davis & Alex Mitchell from [https://boomy.com/ boomy] '''<br />
<br />
*'''02/02 - Social Event and Student-Only Meeting '''<br />
<br />
*'''02/09 - [http://vibeke.info/ Vibeke Sorensen] '''<br />
<br />
*'''02/16 - Unofficial Social and Welcome to WasteLAnd!'''<br />
<br />
*'''02/23 - Break'''<br />
<br />
*'''03/02 - Rapid Fire and Conference Style Talks''' (Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Tamilore Awosile (10 mins)<br />
** Speaker 2: Frank Mondelli (Conference Style)<br />
** Speaker 3: Lloyd May (Rapid Fire)<br />
** Speaker 4: Julia Mills (Conference Style)<br />
** Speaker 5: Nima Farzaneh (Conference Style)<br />
<br />
*'''03/09 - CCRMA Town Hall'''<br />
<br />
*'''03/16 - Break'''<br />
<br />
= Past - Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Lloyd May (Rapid Fire)<br />
** Speaker 2: Chris Chafe (Rapid Fire via zoom, assuming my internet works after the storm)<br />
** Speaker 3: Mark Rau (Rapid Fire)<br />
** Speaker 4: Champ Darabundit (Conference)<br />
** Speaker 5: Eleanor Selfridge-Field (Rapid Fire)<br />
** Speaker 6: Craig Stuart Sapp (Rapid Fire)<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 Special Guest Talks:''' Nils Tonnätt & Victoria Shen<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: 12pm - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town Hall!! <br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.'''</span><br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul DeMarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician whose focus and passion have been to<br />
share the joy of music with others. In 2018, the success of her<br />
innovative music program for senior citizens prompted her to pursue a<br />
simpler path to learning piano. Her engineering background helped her<br />
start working on an idea that bridges the gap between music and<br />
technology.<br />
<br />
The approach to fingering in music has always been and still is one of<br />
the major elements of success for keyboard players. Correct fingering<br />
assists the performer in delivering a better technical and musical<br />
performance. This research presents a technique for generating<br />
fingerings for any sequence of music notes. Dynamic programming and<br />
mathematics are central to this paper; they work alongside rules<br />
set by pianists to calculate the most practical fingerings for any<br />
musical passage.<br />
<br />
The ultimate goal is to facilitate the process of playing the piano<br />
using an AR platform. This helps scale music instruction and allows<br />
for efficient teaching. By solving this problem, virtual instruction<br />
becomes more productive and impactful. Success of this research in<br />
the AR field can also be applied to robotic tasks in educational<br />
programs, video games, and medical fields.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=24000Colloquium2022-12-05T18:59:05Z<p>Mulshine: /* Winter Quarter (2023) */</p>
<hr />
<div>'''Wednesday 5:30pm PT (CCRMA Classroom & Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happens every Wednesday during the academic year from 5:30 &#8211; 7:00pm and meets in the CCRMA Classroom, Knoll 217, often also with a Zoom presence.<br />
<br />
Nette, Matt, and this wiki page are organizing the 2023 colloquia.<br />
<br />
= Fall Quarter (2022)=<br />
<br />
*'''09/28 (Week 1) New Student Introductions<br />
**Speaker 1: Sneha Shah<br />
**Speaker 2: Josh Mitchell<br />
**Speaker 3: Emily Kuo<br />
**Speaker 4: Balazs & Truls<br />
**Speaker 5: Julia Yu<br />
**Speaker 6: Senyuan Fan<br />
**Speaker 7: Celeste Betancur<br />
**Speaker 8: Victoria Litton<br />
**Speaker 9: Yiheng Dong<br />
**Speaker 10 Eito Murakami<br />
**Speaker 11 Soohyun Kim<br />
**Speaker 12 Luna Valentin<br />
**Speaker 13 Terry Feng<br />
**Speaker 14 Alex Han<br />
**Speaker 15 Benny Zhang<br />
**Speaker 16 Sami Wurm<br />
**Speaker 17 Neha Rajagopalan<br />
<br />
*'''10/05 (Week 2) Faculty and Staff Rapid Fire<br />
**Speaker 1 Chris Chafe<br />
**Speaker 2 Ge Wang<br />
**Speaker 3 Patricia Alessandrini<br />
**Speaker 4 Marina Bosi<br />
**Speaker 5 Julius Smith<br />
**Speaker 6 Jarek Kapuściński<br />
**Speaker 7<br />
**Speaker 8<br />
**Speaker 9<br />
**Speaker 10 Mark Rau<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/12 (Week 3) WINGS and Peer Mentoring in Music ''' (Mara Mills visit postponed to Spring)<br />
<br />
*'''10/19 (Week 4) Faculty and Staff Rapid Fire, part 2<br />
**Speaker 1 Takako Fujioka<br />
**Speaker 2 Eleanor Selfridge-Field<br />
**Speaker 3 Craig Stuart Sapp<br />
**Speaker 4<br />
**Speaker 5 Poppy Crum<br />
**Speaker 6 Jonathan Berger<br />
**Speaker 7 Matt Wright<br />
**Speaker 8 Fernando Lopez-Lezcano<br />
**Speaker 9 Constantin Basica<br />
**Speaker 10 Stephanie Sherriff<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/26 (Week 5) Romain Michon, Tanguy Risset, Maxime Popoff : [https://ccrma.stanford.edu/events/high-level-programming-of-fpgas-audio-real-time-signal-processing-applications High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications]<br />
<br />
*'''11/2 (Week 6) - TBD<br />
<br />
*'''11/9 (Week 7) - Pizza & Pedagogy: Assignments and Evaluations<br />
<br />
*'''11/16 (Week 8) - Student-only Town Hall<br />
<br />
*'''11/23 (Week 9) - NO SEMINAR (Thanksgiving break)<br />
<br />
*'''11/30 (Week 10) - Planning session for Winter and Spring colloquia<br />
<br />
<br /><br />
<br />
= Winter Quarter (2023)=<br />
*'''1/11 (Week 1) - (TBD) Kanru Hua, CEO of Synthesizer V, Japan (https://dreamtonics.com/en/synthesizerv/) (Contact: MAMST student Benny (Shicheng) Zhang)<br />
*'''1/18 (Week 2) - Game and International Snacks Night (Nette & Kunwoo)<br />
*'''1/25 (Week 3) - Community-wide rapid-fire talks. ''' Share your recent thoughts, explorations, work, hobbies. Get to know what your peers are up to. Everyone is invited to share. <br />
*'''2/1 (Week 4) - TBD<br />
*'''2/8 (Week 5) - Laura Steenberge (Chris Chafe)<br />
*'''2/15 (Week 6) - Scott Oshiro<br />
*'''2/22 (Week 7) - WebChucK (Chris Chafe + many WebChucK contributors)<br />
*'''3/1 (Week 8) - [tentative] [https://en.wikipedia.org/wiki/Annette_Vande_Gorne Annette Vande Gorne] (Constantin)<br />
*'''3/8 (Week 9) - TBD<br />
*'''3/15 (Week 10) - [tentative] [https://www.jayafrisando.com Jay Afrisando] (Constantin)<br />
<br />
= Spring Quarter (2023)=<br />
*'''4/5 (Week 1) - [https://adamstanovic.com Adam Stanović] (Constantin)<br />
*'''4/12 (Week 2) - TBD<br />
*'''4/19 (Week 3) - Matt Wright / TBD - how about having it in Bing while the GRAIL is set up?!?!?<br />
*'''4/26 (Week 4) - [tentative] Aurie Hsu (Julia Mills)<br />
*'''5/3 (Week 5) - Conference-style (long-form) talks and presentations. Reach out to Mike (mulshine@stanford.edu) to sign up. <br />
*'''5/10 (Week 6) - <br />
*'''5/17 (Week 7) - TBD<br />
*'''5/24 (Week 8) - TBD<br />
*'''5/31 (Week 9) - TBD<br />
*'''6/7 (Week 10) - TBD<br />
<br /><br /><br /><br /><br />
<br />
= Past - Spring Quarter (2022)=<br />
<br />
*'''03/30 - Spring Welcome Dinner at Treehouse'''<br />
*'''04/06 - In-House Project and Research Updates (Everyone is encouraged to present)'''<br />
*'''04/13 - BREAK'''<br />
*'''04/19* Tuesday - Installation by J. Mills'''<br />
*'''04/27 - [http://maramills.org/ Mara Mills] (Virtual Talk)'''<br />
*'''05/04 - Inclusive Teaching Workshop (Lloyd May & CTL)'''<br />
*'''05/11 - BREAK '''<br />
*'''05/18 - BREAK'''<br />
*'''05/25 - BREAK'''<br />
*'''06/01 - BREAK'''<br />
<br />
<br />
*''' Future Colloquia already booked:'''<br />
*10/12 Mara Mills - History of PCM<br />
<br />
= Past - Winter Quarter (2022)=<br />
*'''01/05 - BREAK'''<br />
<br />
*'''01/12 - David Kanaga''' Composer/designer behind the games: [https://store.steampowered.com/app/223450/Dyad/ Dyad], [https://store.steampowered.com/app/219680/Proteus/ Proteus], [https://store.steampowered.com/app/284260/PANORAMICAL/ PANORAMICAL], & the podcast-opera [https://player.fm/series/soft-valkyrie Soft Valkyrie] ([https://stanford.zoom.us/rec/play/mieXwoFVZvaXBeyMXjVYfECaDWF1g-cPB6crtb7F4XDWLBZd9VQrm51DGPZ4MzmkigERYPVV_fEUcpGS._UjZ93byy84C4Z8b?continueMode=true&_x_zm_rtaid=5upEGBnLQ1y0Lo9FUvcfhQ.1642105886449.38f96f3577c62100fbf9b412b90ef670&_x_zm_rhtaid=270 Recording available here])<br />
<br />
*'''01/19 - [http://www.marcevanstein.com/ Marc Evanstein]'''<br />
<br />
*'''01/26 - Walker Davis & Alex Mitchell from [https://boomy.com/ boomy] '''<br />
<br />
*'''02/02 - Social Event and Student-Only Meeting '''<br />
<br />
*'''02/09 - [http://vibeke.info/ Vibeke Sorensen] '''<br />
<br />
*'''02/16 - Unofficial Social and Welcome to WasteLAnd!'''<br />
<br />
*'''02/23 - Break'''<br />
<br />
*'''03/02 - Rapid Fire and Conference Style Talks''' (Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Tamilore Awosile (10 mins)<br />
** Speaker 2: Frank Mondelli (Conference Style)<br />
** Speaker 3: Lloyd May (Rapid Fire)<br />
** Speaker 4: Julia Mills (Conference Style)<br />
** Speaker 5: Nima Farzaneh (Conference Style)<br />
<br />
*'''03/09 - CCRMA Town Hall'''<br />
<br />
*'''03/16 - Break'''<br />
<br />
= Past - Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Lloyd May (Rapid Fire)<br />
** Speaker 2: Chris Chafe (Rapid Fire via zoom, assuming my internet works after the storm)<br />
** Speaker 3: Mark Rau (Rapid Fire)<br />
** Speaker 4: Champ Darabundit (Conference)<br />
** Speaker 5: Eleanor Selfridge-Field (Rapid Fire)<br />
** Speaker 6: Craig Stuart Sapp (Rapid Fire)<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 Special Guest Talks:''' Nils Tonnätt & Victoria Shen<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: 12pm - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town Hall!!<br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.'''</span><br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul Demarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician whose focus and passion has been to<br />
share the joy of music with others. In 2018, the success of her<br />
innovative music program for senior citizens prompted her to seek a<br />
simpler approach to learning the piano. Her engineering background<br />
helped her begin work on an idea that bridges the gap between music<br />
and technology.<br />
<br />
The approach to fingering has always been, and remains, one of the<br />
major elements of success for keyboard players. Correct fingering<br />
helps the performer deliver a better technical and musical<br />
performance. This research presents a technique for generating<br />
fingerings for any sequence of music notes. Dynamic programming and<br />
mathematics are central to the work; combined with rules set by<br />
pianists, they compute the most practical fingerings for any musical<br />
passage.<br />
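The dynamic-programming idea described in the abstract can be illustrated with a toy sketch. This is only a rough illustration of the general approach, not the method from the talk: the `toy_cost` heuristic below is invented for illustration, whereas the actual work uses cost rules set by pianists.

```python
# Toy sketch of dynamic-programming fingering for a monophonic passage.
# ASSUMPTION: toy_cost is a made-up heuristic, not the pianist-derived
# rules from the talk; it only illustrates the DP structure.

def fingering(notes, cost):
    """notes: MIDI pitches; cost(prev_finger, finger, interval) -> float.
    Returns a minimum-cost finger sequence using fingers 1-5."""
    fingers = [1, 2, 3, 4, 5]
    # best[f] = (total cost, cheapest fingering so far that ends on finger f)
    best = {f: (0.0, [f]) for f in fingers}
    for prev, cur in zip(notes, notes[1:]):
        interval = cur - prev
        best = {
            f: min(
                (best[g][0] + cost(g, f, interval), best[g][1] + [f])
                for g in fingers
            )
            for f in fingers
        }
    return min(best.values())[1]

def toy_cost(g, f, interval):
    c = 0.0
    if f == g and interval != 0:
        c += 3.0          # discourage repeating a finger on a new note
    if (interval > 0 and f <= g) or (interval < 0 and f >= g):
        c += 1.0          # finger order should follow melodic direction
    c += 0.2 * abs(abs(interval) - abs(f - g))  # hand-stretch mismatch
    return c
```

For the ascending fragment C-D-E-F-G (MIDI 60, 62, 64, 65, 67), this toy cost yields the natural 1-2-3-4-5 fingering.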
<br />
The ultimate goal is to facilitate the process of learning the piano<br />
on an AR platform, which would help music instruction scale and make<br />
teaching more efficient. Beyond making virtual instruction more<br />
productive and impactful, the techniques developed here could be<br />
applied to robotic tasks in educational programs, video games, and<br />
medical fields.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=23999Colloquium2022-12-05T18:58:27Z<p>Mulshine: /* Winter Quarter (2023) */</p>
<hr />
<div>'''Wednesday 5:30pm PT (CCRMA Classroom & Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happens every Wednesday during the academic year from 5:30 &#8211; 7:00pm and meets in the CCRMA Classroom, Knoll 217, often also with a Zoom presence.<br />
<br />
Nette, Matt, and this wiki page are organizing the 2023 colloquia.<br />
<br />
= Fall Quarter (2022)=<br />
<br />
*'''09/28 (Week 1) New Student Introductions<br />
**Speaker 1: Sneha Shah<br />
**Speaker 2: Josh Mitchell<br />
**Speaker 3: Emily Kuo<br />
**Speaker 4: Balazs & Truls<br />
**Speaker 5: Julia Yu<br />
**Speaker 6: Senyuan Fan<br />
**Speaker 7: Celeste Betancur<br />
**Speaker 8: Victoria Litton<br />
**Speaker 9: Yiheng Dong<br />
**Speaker 10 Eito Murakami<br />
**Speaker 11 Soohyun Kim<br />
**Speaker 12 Luna Valentin<br />
**Speaker 13 Terry Feng<br />
**Speaker 14 Alex Han<br />
**Speaker 15 Benny Zhang<br />
**Speaker 16 Sami Wurm<br />
**Speaker 17 Neha Rajagopalan<br />
<br />
*'''10/05 (Week 2) Faculty and Staff Rapid Fire<br />
**Speaker 1 Chris Chafe<br />
**Speaker 2 Ge Wang<br />
**Speaker 3 Patricia Alessandrini<br />
**Speaker 4 Marina Bosi<br />
**Speaker 5 Julius Smith<br />
**Speaker 6 Jarek Kapuściński<br />
**Speaker 7<br />
**Speaker 8<br />
**Speaker 9<br />
**Speaker 10 Mark Rau<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/12 (Week 3) WINGS and Peer Mentoring in Music ''' (Mara Mills visit postponed to Spring)<br />
<br />
*'''10/19 (Week 4) Faculty and Staff Rapid Fire, part 2<br />
**Speaker 1 Takako Fujioka<br />
**Speaker 2 Eleanor Selfridge-Field<br />
**Speaker 3 Craig Stuart Sapp<br />
**Speaker 4<br />
**Speaker 5 Poppy Crum<br />
**Speaker 6 Jonathan Berger<br />
**Speaker 7 Matt Wright<br />
**Speaker 8 Fernando Lopez-Lezcano<br />
**Speaker 9 Constantin Basica<br />
**Speaker 10 Stephanie Sherriff<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/26 (Week 5) Romain Michon, Tanguy Risset, Maxime Popoff : [https://ccrma.stanford.edu/events/high-level-programming-of-fpgas-audio-real-time-signal-processing-applications High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications]<br />
<br />
*'''11/2 (Week 6) - TBD<br />
<br />
*'''11/9 (Week 7) - Pizza & Pedagogy: Assignments and Evaluations<br />
<br />
*'''11/16 (Week 8) - Student-only Town Hall<br />
<br />
*'''11/23 (Week 9) - NO SEMINAR (Thanksgiving break)<br />
<br />
*'''11/30 (Week 10) - Planning session for Winter and Spring colloquia<br />
<br />
<br /><br />
<br />
= Winter Quarter (2023)=<br />
*'''1/11 (Week 1) - (TBD) Kanru Hua, CEO of Synthesizer V, Japan (https://dreamtonics.com/en/synthesizerv/) (Contact: MAMST student Benny (Shicheng) Zhang)<br />
*'''1/18 (Week 2) - Game and International Snacks Night (Nette & Kunwoo)<br />
*'''1/25 (Week 3) - Community-wide rapid-fire talks. ''' Share your recent thoughts, explorations, work, hobbies. Get to know what your peers are up to. <br />
*'''2/1 (Week 4) - TBD<br />
*'''2/8 (Week 5) - Laura Steenberge (Chris Chafe)<br />
*'''2/15 (Week 6) - Scott Oshiro<br />
*'''2/22 (Week 7) - WebChucK (Chris Chafe + many WebChucK contributors)<br />
*'''3/1 (Week 8) - [tentative] [https://en.wikipedia.org/wiki/Annette_Vande_Gorne Annette Vande Gorne] (Constantin)<br />
*'''3/8 (Week 9) - TBD<br />
*'''3/15 (Week 10) - [tentative] [https://www.jayafrisando.com Jay Afrisando] (Constantin)<br />
<br />
= Spring Quarter (2023)=<br />
*'''4/5 (Week 1) - [https://adamstanovic.com Adam Stanović] (Constantin)<br />
*'''4/12 (Week 2) - TBD<br />
*'''4/19 (Week 3) - Matt Wright / TBD - how about having it in Bing while the GRAIL is set up?!?!?<br />
*'''4/26 (Week 4) - Aurie Hsu (Julia Mills) tentatively<br />
*'''5/3 (Week 5) - Conference-style / long-form talks / presentations. (Reach out to Mike (mulshine@stanford.edu) to sign up.) <br />
*'''5/10 (Week 6) - <br />
*'''5/17 (Week 7) - TBD<br />
*'''5/24 (Week 8) - TBD<br />
*'''5/31 (Week 9) - TBD<br />
*'''6/7 (Week 10) - TBD<br />
<br /><br /><br /><br /><br />
<br />
= Past - Spring Quarter (2022)=<br />
<br />
*'''03/30 - Spring Welcome Dinner at Treehouse'''<br />
*'''04/06 - In-House Project and Research Updates (Everyone is encouraged to present)'''<br />
*'''04/13 - BREAK'''<br />
*'''04/19* Tuesday - Installation by J. Mills'''<br />
*'''04/27 - [http://maramills.org/ Mara Mills] (Virtual Talk)'''<br />
*'''05/04 - Inclusive Teaching Workshop (Lloyd May & CTL)'''<br />
*'''05/11 - BREAK '''<br />
*'''05/18 - BREAK'''<br />
*'''05/25 - BREAK'''<br />
*'''06/01 - BREAK'''<br />
<br />
<br />
*''' Future Colloquia already booked:'''<br />
*10/12 Mara Mills - History of PCM<br />
<br />
= Past - Winter Quarter (2022)=<br />
*'''01/05 - BREAK'''<br />
<br />
*'''01/12 - David Kanaga''' Composer/designer behind the games: [https://store.steampowered.com/app/223450/Dyad/ Dyad], [https://store.steampowered.com/app/219680/Proteus/ Proteus], [https://store.steampowered.com/app/284260/PANORAMICAL/ PANORAMICAL], & the podcast-opera [https://player.fm/series/soft-valkyrie Soft Valkyrie] ([https://stanford.zoom.us/rec/play/mieXwoFVZvaXBeyMXjVYfECaDWF1g-cPB6crtb7F4XDWLBZd9VQrm51DGPZ4MzmkigERYPVV_fEUcpGS._UjZ93byy84C4Z8b?continueMode=true&_x_zm_rtaid=5upEGBnLQ1y0Lo9FUvcfhQ.1642105886449.38f96f3577c62100fbf9b412b90ef670&_x_zm_rhtaid=270 Recording available here])<br />
<br />
*'''01/19 - [http://www.marcevanstein.com/ Marc Evanstein]'''<br />
<br />
*'''01/26 - Walker Davis & Alex Mitchell from [https://boomy.com/ boomy] '''<br />
<br />
*'''02/02 - Social Event and Student-Only Meeting '''<br />
<br />
*'''02/09 - [http://vibeke.info/ Vibeke Sorensen] '''<br />
<br />
*'''02/16 - Unofficial Social and Welcome to WasteLAnd!'''<br />
<br />
*'''02/23 - Break'''<br />
<br />
*'''03/02 - Rapid Fire and Conference Style Talks''' (Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Tamilore Awosile (10 mins)<br />
** Speaker 2: Frank Mondelli (Conference Style)<br />
** Speaker 3: Lloyd May (Rapid Fire)<br />
** Speaker 4: Julia Mills (Conference Style)<br />
** Speaker 5: Nima Farzaneh (Conference Style)<br />
<br />
*'''03/09 - CCRMA Town Hall'''<br />
<br />
*'''03/16 - Break'''<br />
<br />
= Past - Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Lloyd May (Rapid Fire)<br />
** Speaker 2: Chris Chafe (Rapid Fire via zoom, assuming my internet works after the storm)<br />
** Speaker 3: Mark Rau (Rapid Fire)<br />
** Speaker 4: Champ Darabundit (Conference)<br />
** Speaker 5: Eleanor Selfridge-Field (Rapid Fire)<br />
** Speaker 6: Craig Stuart Sapp (Rapid Fire)<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 Special Guest Talks:''' Nils Tonnätt & Victoria Shen<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: 12pm - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town Hall!!<br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.'''</span><br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul Demarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician whose focus and passion has been to<br />
share the joy of music with others. In 2018, the success of her<br />
innovative music program for senior citizens prompted her to seek a<br />
simpler approach to learning the piano. Her engineering background<br />
helped her begin work on an idea that bridges the gap between music<br />
and technology.<br />
<br />
The approach to fingering has always been, and remains, one of the<br />
major elements of success for keyboard players. Correct fingering<br />
helps the performer deliver a better technical and musical<br />
performance. This research presents a technique for generating<br />
fingerings for any sequence of music notes. Dynamic programming and<br />
mathematics are central to the work; combined with rules set by<br />
pianists, they compute the most practical fingerings for any musical<br />
passage.<br />
<br />
The ultimate goal is to facilitate the process of learning the piano<br />
on an AR platform, which would help music instruction scale and make<br />
teaching more efficient. Beyond making virtual instruction more<br />
productive and impactful, the techniques developed here could be<br />
applied to robotic tasks in educational programs, video games, and<br />
medical fields.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=23998Colloquium2022-12-05T18:57:42Z<p>Mulshine: </p>
<hr />
<div>'''Wednesday 5:30pm PT (CCRMA Classroom & Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happens every Wednesday during the academic year from 5:30 &#8211; 7:00pm and meets in the CCRMA Classroom, Knoll 217, often also with a Zoom presence.<br />
<br />
Nette, Matt, and this wiki page are organizing the 2023 colloquia.<br />
<br />
= Fall Quarter (2022)=<br />
<br />
*'''09/28 (Week 1) New Student Introductions<br />
**Speaker 1: Sneha Shah<br />
**Speaker 2: Josh Mitchell<br />
**Speaker 3: Emily Kuo<br />
**Speaker 4: Balazs & Truls<br />
**Speaker 5: Julia Yu<br />
**Speaker 6: Senyuan Fan<br />
**Speaker 7: Celeste Betancur<br />
**Speaker 8: Victoria Litton<br />
**Speaker 9: Yiheng Dong<br />
**Speaker 10 Eito Murakami<br />
**Speaker 11 Soohyun Kim<br />
**Speaker 12 Luna Valentin<br />
**Speaker 13 Terry Feng<br />
**Speaker 14 Alex Han<br />
**Speaker 15 Benny Zhang<br />
**Speaker 16 Sami Wurm<br />
**Speaker 17 Neha Rajagopalan<br />
<br />
*'''10/05 (Week 2) Faculty and Staff Rapid Fire<br />
**Speaker 1 Chris Chafe<br />
**Speaker 2 Ge Wang<br />
**Speaker 3 Patricia Alessandrini<br />
**Speaker 4 Marina Bosi<br />
**Speaker 5 Julius Smith<br />
**Speaker 6 Jarek Kapuściński<br />
**Speaker 7<br />
**Speaker 8<br />
**Speaker 9<br />
**Speaker 10 Mark Rau<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/12 (Week 3) WINGS and Peer Mentoring in Music ''' (Mara Mills visit postponed to Spring)<br />
<br />
*'''10/19 (Week 4) Faculty and Staff Rapid Fire, part 2<br />
**Speaker 1 Takako Fujioka<br />
**Speaker 2 Eleanor Selfridge-Field<br />
**Speaker 3 Craig Stuart Sapp<br />
**Speaker 4<br />
**Speaker 5 Poppy Crum<br />
**Speaker 6 Jonathan Berger<br />
**Speaker 7 Matt Wright<br />
**Speaker 8 Fernando Lopez-Lezcano<br />
**Speaker 9 Constantin Basica<br />
**Speaker 10 Stephanie Sherriff<br />
**Speaker 11<br />
**Speaker 12<br />
**Speaker 13<br />
**Speaker 14<br />
**Speaker 15<br />
<br />
*'''10/26 (Week 5) Romain Michon, Tanguy Risset, Maxime Popoff : [https://ccrma.stanford.edu/events/high-level-programming-of-fpgas-audio-real-time-signal-processing-applications High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications]<br />
<br />
*'''11/2 (Week 6) - TBD<br />
<br />
*'''11/9 (Week 7) - Pizza & Pedagogy: Assignments and Evaluations<br />
<br />
*'''11/16 (Week 8) - Student-only Town Hall<br />
<br />
*'''11/23 (Week 9) - NO SEMINAR (Thanksgiving break)<br />
<br />
*'''11/30 (Week 10) - Planning session for Winter and Spring colloquia<br />
<br />
<br /><br />
<br />
= Winter Quarter (2023)=<br />
*'''1/11 (Week 1) - (TBD) Kanru Hua, CEO of Synthesizer V, Japan (https://dreamtonics.com/en/synthesizerv/) (Contact: MAMST student Benny (Shicheng) Zhang)<br />
*'''1/18 (Week 2) - Game and International Snacks Night (Nette & Kunwoo)<br />
*'''1/25 (Week 3) - Community-wide rapid-fire talks.''' Share your recent thoughts, explorations, work, hobbies. Get to know what your peers are up to. <br />
*'''2/1 (Week 4) - TBD<br />
*'''2/8 (Week 5) - Laura Steenberge (Chris Chafe)<br />
*'''2/15 (Week 6) - Scott Oshiro<br />
*'''2/22 (Week 7) - WebChucK (Chris Chafe + many WebChucK contributors)<br />
*'''3/1 (Week 8) - [tentative] [https://en.wikipedia.org/wiki/Annette_Vande_Gorne Annette Vande Gorne] (Constantin)<br />
*'''3/8 (Week 9) - TBD<br />
*'''3/15 (Week 10) - [tentative] [https://www.jayafrisando.com Jay Afrisando] (Constantin)<br />
<br />
= Spring Quarter (2023)=<br />
*'''4/5 (Week 1) - [https://adamstanovic.com Adam Stanović] (Constantin)<br />
*'''4/12 (Week 2) - TBD<br />
*'''4/19 (Week 3) - Matt Wright / TBD - how about having it in Bing while the GRAIL is set up?!?!?<br />
*'''4/26 (Week 4) - Aurie Hsu (Julia Mills) tentatively<br />
*'''5/3 (Week 5) - Conference-style / long-form talks / presentations. (Reach out to Mike (mulshine@stanford.edu) to sign up.) <br />
*'''5/10 (Week 6) - <br />
*'''5/17 (Week 7) - TBD<br />
*'''5/24 (Week 8) - TBD<br />
*'''5/31 (Week 9) - TBD<br />
*'''6/7 (Week 10) - TBD<br />
<br /><br /><br /><br /><br />
<br />
= Past - Spring Quarter (2022)=<br />
<br />
*'''03/30 - Spring Welcome Dinner at Treehouse'''<br />
*'''04/06 - In-House Project and Research Updates (Everyone is encouraged to present)'''<br />
*'''04/13 - BREAK'''<br />
*'''04/19* Tuesday - Installation by J. Mills'''<br />
*'''04/27 - [http://maramills.org/ Mara Mills] (Virtual Talk)'''<br />
*'''05/04 - Inclusive Teaching Workshop (Lloyd May & CTL)'''<br />
*'''05/11 - BREAK '''<br />
*'''05/18 - BREAK'''<br />
*'''05/25 - BREAK'''<br />
*'''06/01 - BREAK'''<br />
<br />
<br />
*''' Future Colloquia already booked:'''<br />
*10/12 Mara Mills - History of PCM<br />
<br />
= Past - Winter Quarter (2022)=<br />
*'''01/05 - BREAK'''<br />
<br />
*'''01/12 - David Kanaga''' Composer/designer behind the games: [https://store.steampowered.com/app/223450/Dyad/ Dyad], [https://store.steampowered.com/app/219680/Proteus/ Proteus], [https://store.steampowered.com/app/284260/PANORAMICAL/ PANORAMICAL], & the podcast-opera [https://player.fm/series/soft-valkyrie Soft Valkyrie] ([https://stanford.zoom.us/rec/play/mieXwoFVZvaXBeyMXjVYfECaDWF1g-cPB6crtb7F4XDWLBZd9VQrm51DGPZ4MzmkigERYPVV_fEUcpGS._UjZ93byy84C4Z8b?continueMode=true&_x_zm_rtaid=5upEGBnLQ1y0Lo9FUvcfhQ.1642105886449.38f96f3577c62100fbf9b412b90ef670&_x_zm_rhtaid=270 Recording available here])<br />
<br />
*'''01/19 - [http://www.marcevanstein.com/ Marc Evanstein]'''<br />
<br />
*'''01/26 - Walker Davis & Alex Mitchell from [https://boomy.com/ boomy] '''<br />
<br />
*'''02/02 - Social Event and Student-Only Meeting '''<br />
<br />
*'''02/09 - [http://vibeke.info/ Vibeke Sorensen] '''<br />
<br />
*'''02/16 - Unofficial Social and Welcome to WasteLAnd!'''<br />
<br />
*'''02/23 - Break'''<br />
<br />
*'''03/02 - Rapid Fire and Conference Style Talks''' (Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Tamilore Awosile (10 mins)<br />
** Speaker 2: Frank Mondelli (Conference Style)<br />
** Speaker 3: Lloyd May (Rapid Fire)<br />
** Speaker 4: Julia Mills (Conference Style)<br />
** Speaker 5: Nima Farzaneh (Conference Style)<br />
<br />
*'''03/09 - CCRMA Town Hall'''<br />
<br />
*'''03/16 - Break'''<br />
<br />
= Past - Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** (Conference Style = 15 minutes, Rapid Fire = 5 minutes)<br />
** Speaker 1: Lloyd May (Rapid Fire)<br />
** Speaker 2: Chris Chafe (Rapid Fire via zoom, assuming my internet works after the storm)<br />
** Speaker 3: Mark Rau (Rapid Fire)<br />
** Speaker 4: Champ Darabundit (Conference)<br />
** Speaker 5: Eleanor Selfridge-Field (Rapid Fire)<br />
** Speaker 6: Craig Stuart Sapp (Rapid Fire)<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 Special Guest Talks:''' Nils Tonnätt & Victoria Shen<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: 12pm - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town Hall!! <br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.'''</span><br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul DeMarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician whose focus and passion has been to share the joy of music with others. In 2018, the success of an innovative music program she designed for senior citizens became the turning point that led her to pursue a simpler route to learning the piano. Her engineering background helped her start working on an idea that bridges the gap between music and technology.<br />
<br />
Fingering has always been, and still is, one of the major elements of success for keyboard players: correct fingering helps the performer deliver a better technical and musical performance. This research presents a technique to generate fingering for any sequence of music notes. Dynamic programming and mathematics are central to the work, operating alongside rules set by pianists to calculate the most practical fingering for any musical passage.<br />
<br />
The ultimate goal is to facilitate the process of learning the piano using an AR platform, which would help scale music instruction and make teaching more efficient. Solving this problem would make virtual instruction more productive and impactful. Beyond AR, the results of this research could be applied to robotic tasks in educational programs, video games, and medical fields.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=23457Colloquium2021-11-05T19:13:35Z<p>Mulshine: </p>
<hr />
<div>'''Wednesday 5:30pm PT (Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happened every Wednesday during the academic year from 5:30 - 7:00pm and meets in the CCRMA Classroom, Knoll 217. During the pandemic, the colloquium has been held consistently via Zoom.<br />
<br />
'''The colloquium team for 2021-2022 is:<br/><br />
<br />
Kunwoo Kim - kunwoo@ccrma.stanford.edu <br /><br />
Lloyd May - lloyd@ccrma.stanford.edu <br /><br />
Mike Mulshine - mulshine@ccrma.stanford.edu <br /><br />
Marise Van Zyl - marise@ccrma.stanford.edu <br /><br />
<br /><br />
<br />
= Autumn Quarter (2021)=<br />
<br />
*'''9/22 New Student Introductions'''<br />
** Speaker 1: Kimia Koochakzadeh-Yazdi<br />
** Speaker 2: Taylor Goss<br />
** Speaker 3: Julia Mills<br />
** Speaker 4: Kiran Gandhi<br />
** Speaker 5: Dirk Roosenburg<br />
** Speaker 6: Aaron Hodges<br />
** Speaker 7: Nick Shaheed<br />
** Speaker 8: Nima Farzaneh<br />
** Speaker 9: Noah Berrie<br />
** Speaker 10: Angela Lee<br />
<br />
* '''9/29: [https://deutsch.ucsd.edu/ Diana Deutsch] || [https://stanford.zoom.us/rec/play/BqWvlQm3A56M40R7RacsoECfgLyDBceMAKCmv-reuiF9z2vqBe2zOQkuIhYcx5yas_qaI6rNwlJuEhHC.gNOkgLaCa1MoGAwy?continueMode=true&_x_zm_rtaid=pLyEfNNkT1ehEzQ0llwZ3w.1633020343192.a1088fb5aa65f3bfc1eec9f8cd66a807&_x_zm_rhtaid=200 Recording of talk] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.webm Pre-recorded lecture (Lightweight)] || [https://ccrma.stanford.edu/~cc/DDeutsch_CCRMA_Video_2021c.mov Pre-recorded lecture (Full-res)]''' <br />
<br />
*'''10/6 [https://stanford.zoom.us/rec/share/-dg0kA_9aJHEPYIahSBeG-5H4Vh9R6uT7M3qBfoBXEWQ69agHXeOyAgzBLskQoaF.rtlqEmUAd8b8p4tb Faculty/Staff Introductions Part 1]'''<br />
**Speaker 1: <br />
** Speaker 2: Matt<br />
** Speaker 3: Ge<br />
** Speaker 4: Takako<br />
** Speaker 5: [https://www.youtube.com/watch?v=rNGpbxAPU1c Julius]<br />
** Speaker 6: <br />
** Speaker 7: <br />
<br />
*'''10/13 Faculty/Staff Introductions Part 2'''<br />
** Speaker 1: Constantin<br />
** Speaker 2: Nando<br />
** Speaker 3: Jonathan (B)<br />
** Speaker 4: Jarek<br />
** Speaker 5: Nick<br />
** Speaker 6: Chris C<br />
** Speaker 7: Patricia<br />
** Speaker 8: Marina<br />
<br />
*'''10/20 - CCRMA Town Hall'''<br />
<br />
*'''10/27 ''' BREAK<br />
<br />
*'''11/03 ''' Social Event (TBD)<br />
<br />
*'''11/10 Rapid Fire & Conference Style '''(Sign-ups are open!)<br />
** Speaker 1: <br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Champ Darabundit<br />
** Speaker 4: Eleanor Selfridge-Field<br />
** Speaker 5: Craig Stuart Sapp<br />
<br />
*'''11/17 ''' BREAK<br />
<br />
*'''12/1 - [hold - Matt]'''<br />
<br />
*--- Future Colloquia Already Booked ---- <br />
*01/12 - David Kanaga <br />
<br />
*01/26 - Walker Davis & Alex Mitchell (boomy) <br />
<br />
*04/27 - Mara Mills<br />
<br />
= Past - Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: 12pm - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Break! Rapid-Fire Talks Postponed to 5/19''' <br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Rapid Fire & Conference Style Talks''' - sign up here via your CCRMA login<br />
** '''Rapid Fire Signups (5 min)'''<br />
*** Speaker 1: CC waveguide mesh, part 2, realtime wavefield output<br />
*** Speaker 2: <br />
*** Speaker 3: <br />
*** Speaker 4: <br />
*** Speaker 5: Ge "ChucK: new features, new bugs, new worlds (ChucKTrip?)"<br />
*** Speaker 6: <br />
*** Speaker 7: <br />
*** Speaker 8: <br />
** '''Conference Style Signups (15 min)'''<br />
*** Speaker 1: Marise van Zyl (rapid fire)<br />
*** Speaker 2: Prateek Verma<br />
*** Speaker 3: Fernando Lopez-Lezcano<br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town Hall!! <br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.'''</span><br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul DeMarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician whose focus and passion has been to share the joy of music with others. In 2018, the success of an innovative music program she designed for senior citizens became the turning point that led her to pursue a simpler route to learning the piano. Her engineering background helped her start working on an idea that bridges the gap between music and technology.<br />
<br />
Fingering has always been, and still is, one of the major elements of success for keyboard players: correct fingering helps the performer deliver a better technical and musical performance. This research presents a technique to generate fingering for any sequence of music notes. Dynamic programming and mathematics are central to the work, operating alongside rules set by pianists to calculate the most practical fingering for any musical passage.<br />
<br />
The ultimate goal is to facilitate the process of learning the piano using an AR platform, which would help scale music instruction and make teaching more efficient. Solving this problem would make virtual instruction more productive and impactful. Beyond AR, the results of this research could be applied to robotic tasks in educational programs, video games, and medical fields.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=23126Mulshine:320C2021-05-17T18:12:35Z<p>Mulshine: /* Week 7 */</p>
<hr />
<div>=Music 320C: Software Projects in Music/Audio Signal Processing=<br />
==My Ever-Evolving 320C Project Idea==<br />
<br />
I want to make a quirky, visually elaborate, "all-in-one," "one-stop" Vocal FX plugin for singer-songwriters, rappers, and other vocalists. <br />
<br />
<br />
I want the UI to feature many bright colors, shapes, aesthetically/socially meaningful artifacts, and require unconventional (but fun and intuitive) means of interaction to control the Vocal FX chain. Ideally there will be no numbers, sliders, knobs, meters, or other traditional plugin UI elements in my plugin. This is inspired by my interests in developing unique new audiovisual interactive interfaces for music consumption/distribution, being a singer-songwriter myself, and needing to often work on vocal production. Check out [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5] for some modal and aesthetic reference. <br />
<br />
<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled in [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I often make web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and a few spectral delay effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more.<br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply parametric EQ (peak filter)''' to input signal with frequency between '''5-8 kHz''' and a '''very high Q (like 100+)'''. This will produce a signal that peaks really aggressively on sibilant "s" sounds, since "s" frequency content is packed into the 5-8 kHz range. <br />
# '''Compress original input signal with sibilant filtered signal as sidechain input.''' Because the sidechain input peaks aggressively on "s" sounds, the compressor will compress the original input only during those "s" sounds. The envelope of the compressor needs to be really quick to cut out "s" sounds as soon as they happen, and before they are perceptible. In experiments, I had to set the compressor's attack to approximately 0.15ms and release to 15-20ms to get smooth, immediately responsive compression. <br />
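To make the signal flow concrete, here is a minimal offline Python sketch of the two stages above: a sharply resonant peaking EQ drives the sidechain of a fast compressor that ducks the dry signal. This is an illustration of the algorithm only, not the JUCE/LEAF code; the center frequency, threshold, ratio, and envelope times are placeholder values in the ranges mentioned above.<br />

```python
import math

def peaking_eq_coeffs(fs, f0, q, gain_db):
    # RBJ "Audio EQ Cookbook" peaking filter: boosts gain_db around f0, unity elsewhere
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    return [bi / den[0] for bi in b], [di / den[0] for di in den]

def biquad(x, b, a):
    # direct-form I filter over a list of samples
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

def de_ess(x, fs, f0=6500.0, thresh_db=0.0, ratio=8.0,
           attack_ms=0.15, release_ms=18.0):
    # stage 1: resonant peak boost around the sibilant band feeds the sidechain
    b, a = peaking_eq_coeffs(fs, f0, q=100.0, gain_db=24.0)
    side = biquad(x, b, a)
    # stage 2: compress the dry signal whenever the sidechain envelope exceeds threshold
    atk = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    y = []
    for dry, sc in zip(x, side):
        level = abs(sc)
        coeff = atk if level > env else rel   # fast attack, slower release
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * math.log10(max(env, 1e-9))
        over = max(0.0, level_db - thresh_db)
        gain_db = -over * (1.0 - 1.0 / ratio)  # downward compression above threshold
        y.append(dry * 10 ** (gain_db / 20.0))
    return y
```

Because the peaking EQ is unity-gain outside its band, the sidechain still carries the full signal; the threshold is what keeps non-sibilant material from triggering gain reduction.<br />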
<br />
<br />
I'm not trying to build a whole plugin UI completely from scratch at this stage. Looking around for code to work from, I was lucky enough to find an [https://github.com/p-hlp/CTAGDRC open source dynamic range compressor JUCE plugin] by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project ([https://github.com/spiricom/LEAF/ LEAF]) into this plugin and was able to get the de-esser algorithm above to work, along with a couple extra knobs and buttons in the UI. <br />
<br />
<br />
This side project into de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.<br />
<br />
===Week 3 + 4===<br />
<br />
I spent a lot of time developing and fine-tuning my DeEsser plugin. <br />
<br />
I also spent some time implementing an autotune effect using the [https://github.com/spiricom/LEAF LEAF] library's tRetune class. I implemented a tuning system in which the root pitch and scale can be selected. <br />
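Conceptually, such a tuning system snaps each detected frequency to the nearest pitch of a scale built on a chosen root. Here is a hypothetical Python sketch of that quantization step (my own helper for illustration, not the tRetune API):<br />

```python
import math

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees in semitones above the root

def snap_to_scale(freq_hz, root_midi=60, degrees=MAJOR):
    # convert frequency to a fractional MIDI note number
    midi = 69 + 12 * math.log2(freq_hz / 440.0)
    # distance above the root, wrapped into one octave
    octave, within = divmod(midi - root_midi, 12)
    # nearest allowed degree (including the root an octave up, for wraparound)
    best = min(degrees + [12], key=lambda d: abs(d - within))
    snapped = root_midi + 12 * octave + best
    return 440.0 * 2 ** ((snapped - 69) / 12)
```

An autotune effect would run this on every pitch estimate and shift the voice by the ratio of snapped to detected frequency.<br />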
<br />
I also started working through some JUCE animation tutorials.<br />
<br />
===Week 5 + 6===<br />
<br />
I decided to roll with the analogy of a landscape for my vocal FX plugin. <br />
<br />
So far I've implemented a couple features of this vocal FX landscape that I am (for now) calling "voxworld": <br />
<br />
# '''A basic EQ "horizon"''' in which a landscape's horizon can be drawn and redrawn to set the peaks of a constant-Q equalizer with 20 bands (so far). I exported the DSP source for mth_octave_filterbank_demo(2) to C++ and ported the code into a JUCE plugin setup to interface with the LEAF library. <br />
# '''Dots/"clouds"''' that appear, fade away, then reappear (continuously) when you click in the "sky." The more clouds, the higher the feedback gain in a delay line ("echo") effect. <br />
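The UI-to-DSP mapping behind these two features can be sketched as follows; the names and ranges here are placeholders of my own, not the plugin's actual code:<br />

```python
# Hypothetical mapping from "voxworld" UI state to DSP parameters.

def horizon_to_band_gains_db(heights, min_db=-24.0, max_db=12.0):
    # each horizon height in [0, 1] becomes one band gain of the constant-Q EQ
    return [min_db + h * (max_db - min_db) for h in heights]

def clouds_to_feedback(num_clouds, max_clouds=30, max_feedback=0.9):
    # more clouds -> more delay-line feedback, capped below 1.0 for stability
    return min(num_clouds, max_clouds) / max_clouds * max_feedback
```

Capping the feedback below 1.0 keeps the echo decaying no matter how many clouds fill the sky.<br />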
<br />
Next I need to think about ways to remove the clouds and integrate a variety of other effects. I want to incorporate autotune, choruses, harmonizers, basic reverbs and delays, formant shifters, etc. It will be a challenge to design intuitive ways of toggling on/off these effects as well as changing their relative mixes and other internal parameters without the use of traditional UI elements (sliders, knobs, numbers). I will do my best to deliver fun, easy-to-use/understand, visually compelling, and powerful control paradigms in the UI, but will likely also provide a super-editor mode that allows users to fine-tune certain effect details. <br />
<br />
I have already noticed some performance issues when I add lots of dots. Currently I'm not using the GPU to draw, so my next step in this journey is to understand how to use [https://docs.juce.com/master/tutorial_open_gl_application.html OpenGL in a JUCE plugin] and learn how to program some shaders... fun fun!<br />
<br />
===Week 7===<br />
I have been working on a variety of vocal effects using LEAF. These include:<br />
<br />
# '''Autotune'''<br />
# '''Harmonizer'''<br />
# '''Formant Shifter'''</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=23125Mulshine:320C2021-05-17T18:12:26Z<p>Mulshine: /* Week 5 + 6 */</p>
<hr />
<div>=Music 320C: Software Projects in Music/Audio Signal Processing=<br />
==My Ever-Evolving 320C Project Idea==<br />
<br />
I want to make a quirky, visually elaborate, "all-in-one," "one-stop" Vocal FX plugin for singer-songwriters, rappers, and other vocalists. <br />
<br />
<br />
I want the UI to feature many bright colors, shapes, aesthetically/socially meaningful artifacts, and require unconventional (but fun and intuitive) means of interaction to control the Vocal FX chain. Ideally there will be no numbers, sliders, knobs, meters, or other traditional plugin UI elements in my plugin. This is inspired by my interests in developing unique new audiovisual interactive interfaces for music consumption/distribution, being a singer-songwriter myself, and needing to often work on vocal production. Check out [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5] for some modal and aesthetic reference. <br />
<br />
<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled in [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I often make web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and a few spectral delay effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more.<br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply parametric EQ (peak filter)''' to input signal with frequency between '''5-8 kHz''' and a '''very high Q (like 100+)'''. This will produce a signal that peaks really aggressively on sibilant "s" sounds, since "s" frequency content is packed into the 5-8 kHz range. <br />
# '''Compress original input signal with sibilant filtered signal as sidechain input.''' Because the sidechain input peaks aggressively on "s" sounds, the compressor will compress the original input only during those "s" sounds. The envelope of the compressor needs to be really quick to cut out "s" sounds as soon as they happen, and before they are perceptible. In experiments, I had to set the compressor's attack to approximately 0.15ms and release to 15-20ms to get smooth, immediately responsive compression. <br />
<br />
<br />
I'm not trying to build a whole plugin UI completely from scratch at this stage. Looking around for code to work from, I was lucky enough to find an [https://github.com/p-hlp/CTAGDRC open source dynamic range compressor JUCE plugin] by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project ([https://github.com/spiricom/LEAF/ LEAF]) into this plugin and was able to get the de-esser algorithm above to work, along with a couple extra knobs and buttons in the UI. <br />
<br />
<br />
This side project into de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.<br />
<br />
===Week 3 + 4===<br />
<br />
I spent a lot of time developing and fine-tuning my DeEsser plugin. <br />
<br />
I also spent some time implementing an autotune effect using the [https://github.com/spiricom/LEAF LEAF] library's tRetune class. I implemented a tuning system in which the root pitch and scale can be selected. <br />
<br />
I also started working through some JUCE animation tutorials.<br />
<br />
===Week 5 + 6===<br />
<br />
I decided to roll with the analogy of a landscape for my vocal FX plugin. <br />
<br />
So far I've implemented a couple features of this vocal FX landscape that I am (for now) calling "voxworld": <br />
<br />
# '''A basic EQ "horizon"''' in which a landscape's horizon can be drawn and redrawn to set the peaks of a constant-Q equalizer with 20 bands (so far). I exported the DSP source for mth_octave_filterbank_demo(2) to C++ and ported the code into a JUCE plugin setup to interface with the LEAF library. <br />
# '''Dots/"clouds"''' that appear, fade away, then reappear (continuously) when you click in the "sky." The more clouds, the higher the feedback gain in a delay line ("echo") effect. <br />
<br />
Next I need to think about ways to remove the clouds and integrate a variety of other effects. I want to incorporate autotune, choruses, harmonizers, basic reverbs and delays, formant shifters, etc. It will be a challenge to design intuitive ways of toggling on/off these effects as well as changing their relative mixes and other internal parameters without the use of traditional UI elements (sliders, knobs, numbers). I will do my best to deliver fun, easy-to-use/understand, visually compelling, and powerful control paradigms in the UI, but will likely also provide a super-editor mode that allows users to fine-tune certain effect details. <br />
<br />
I have already noticed some performance issues when I add lots of dots. Currently I'm not using the GPU to draw, so my next step in this journey is to understand how to use [https://docs.juce.com/master/tutorial_open_gl_application.html OpenGL in a JUCE plugin] and learn how to program some shaders... fun fun!<br />
<br />
===Week 7===<br />
I have been working on a variety of vocal effects using LEAF. These include:<br />
<br />
# '''Autotune'''<br />
<br />
# '''Harmonizer'''<br />
<br />
# '''Formant Shifter'''</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=23124Mulshine:320C2021-05-17T18:12:04Z<p>Mulshine: /* My Ever-Evolving 320C Project Idea */</p>
<hr />
<div>=Music 320C: Software Projects in Music/Audio Signal Processing=<br />
==My Ever-Evolving 320C Project Idea==<br />
<br />
I want to make a quirky, visually elaborate, "all-in-one," "one-stop" Vocal FX plugin for singer-songwriters, rappers, and other vocalists. <br />
<br />
<br />
I want the UI to feature many bright colors, shapes, aesthetically/socially meaningful artifacts, and require unconventional (but fun and intuitive) means of interaction to control the Vocal FX chain. Ideally there will be no numbers, sliders, knobs, meters, or other traditional plugin UI elements in my plugin. This is inspired by my interests in developing unique new audiovisual interactive interfaces for music consumption/distribution, being a singer-songwriter myself, and needing to often work on vocal production. Check out [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5] for some modal and aesthetic reference. <br />
<br />
<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled in [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I often make web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and a few spectral delay effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more.<br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply parametric EQ (peak filter)''' to input signal with frequency between '''5-8 kHz''' and a '''very high Q (like 100+)'''. This will produce a signal that peaks really aggressively on sibilant "s" sounds, since "s" frequency content is packed into the 5-8 kHz range. <br />
# '''Compress original input signal with sibilant filtered signal as sidechain input.''' Because the sidechain input peaks aggressively on "s" sounds, the compressor will compress the original input only during those "s" sounds. The envelope of the compressor needs to be really quick to cut out "s" sounds as soon as they happen, and before they are perceptible. In experiments, I had to set the compressor's attack to approximately 0.15ms and release to 15-20ms to get smooth, immediately responsive compression. <br />
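The two steps above can be sketched in C++ as a gain computer: an envelope follower tracks the level of the (band-filtered) sidechain, and the main signal is attenuated whenever that level exceeds a threshold. This is a minimal sketch, not the CTAGDRC implementation; the struct and function names (`EnvelopeFollower`, `deEssGain`) and the threshold/ratio parameters are my own illustrative choices.

```cpp
#include <cmath>
#include <cassert> // for the quick self-checks below

// One-pole envelope follower with separate attack/release times,
// matching the fast detector settings described above (~0.15 ms / ~15 ms).
struct EnvelopeFollower {
    double attackCoeff, releaseCoeff, env = 0.0;
    EnvelopeFollower(double attackMs, double releaseMs, double sampleRate) {
        attackCoeff  = std::exp(-1.0 / (0.001 * attackMs  * sampleRate));
        releaseCoeff = std::exp(-1.0 / (0.001 * releaseMs * sampleRate));
    }
    double process(double x) {
        double in = std::fabs(x);
        // Use the fast coefficient when the level is rising, slow when falling.
        double c = (in > env) ? attackCoeff : releaseCoeff;
        env = c * env + (1.0 - c) * in;
        return env;
    }
};

// Sidechain-keyed gain reduction: when the filtered sidechain level
// exceeds the threshold, reduce the overshoot by the compression ratio
// and return the linear gain to apply to the *original* signal.
double deEssGain(double sidechainLevel, double threshold, double ratio) {
    if (sidechainLevel <= threshold) return 1.0; // below threshold: unity gain
    double overshootDb = 20.0 * std::log10(sidechainLevel / threshold);
    double reducedDb   = overshootDb / ratio;
    return (threshold * std::pow(10.0, reducedDb / 20.0)) / sidechainLevel;
}
```

Because the detector only rises on sibilant energy, the gain reduction engages during "s" sounds and returns to unity in between.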
<br />
<br />
I'm not trying to build a whole plugin UI completely from scratch at this stage. Looking around for code to work from, I was lucky enough to find an [https://github.com/p-hlp/CTAGDRC open source dynamic range compressor JUCE plugin] by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project ([https://github.com/spiricom/LEAF/ LEAF]) into this plugin and was able to get the de-esser algorithm above to work, along with a couple extra knobs and buttons in the UI. <br />
<br />
<br />
This side project into de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.<br />
<br />
===Week 3 + 4===<br />
<br />
I spent a lot of time developing and fine-tuning my DeEsser plugin. <br />
<br />
I also spent some time implementing an autotune effect using the [https://github.com/spiricom/LEAF LEAF] library's tRetune class. I implemented a tuning system defined by a user-selectable root pitch and scale. <br />
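The root-pitch-and-scale tuning logic can be sketched as a snapping step: given a detected frequency, find the nearest pitch in the chosen scale and hand that target to the pitch shifter (tRetune does the actual shifting in LEAF). This is a hypothetical sketch; `snapToScale` and its parameters are my own names, not LEAF API.

```cpp
#include <cmath>
#include <vector>
#include <cassert> // for the quick self-checks below

// Snap a detected frequency (Hz) to the nearest degree of a scale
// built on a root MIDI note. scaleSemis lists semitone offsets from
// the root, e.g. {0,2,4,5,7,9,11} for a major scale.
double snapToScale(double freqHz, int rootMidi, const std::vector<int>& scaleSemis) {
    double midi = 69.0 + 12.0 * std::log2(freqHz / 440.0); // Hz -> fractional MIDI
    double best = 0.0, bestDist = 1e9;
    // Search scale degrees across nearby octaves for the closest pitch.
    for (int octave = -2; octave <= 10; ++octave)
        for (int s : scaleSemis) {
            double candidate = rootMidi + 12.0 * octave + s;
            double d = std::fabs(candidate - midi);
            if (d < bestDist) { bestDist = d; best = candidate; }
        }
    return 440.0 * std::pow(2.0, (best - 69.0) / 12.0); // MIDI -> Hz
}
```

A slightly sharp A (445 Hz) snaps back to 440 Hz in C major, while pitches already on a scale degree pass through unchanged.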
<br />
I also started working through some JUCE animation tutorials.<br />
<br />
===Week 5 + 6===<br />
<br />
I decided to roll with the analogy of a landscape for my vocal FX plugin. <br />
<br />
So far I've implemented a couple features of this vocal FX landscape that I am (for now) calling "voxworld": <br />
<br />
# '''A basic EQ "horizon"''' in which a landscape's horizon can be drawn and redrawn to set the peaks of a constant-Q equalizer with 20 bands (so far). I exported the DSP source for mth_octave_filterbank_demo(2) to C++ and ported the code into a JUCE plugin setup to interface with the LEAF library. <br />
<br />
# '''Dots/"clouds"''' that appear, fade away, then reappear (continuously) when you click in the "sky." The more clouds, the higher the feedback gain in a delay line ("echo") effect. <br />
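Both mappings above boil down to simple parameter curves: sample the drawn horizon's height at each band's horizontal position for the EQ, and scale the cloud count into a (stability-capped) delay feedback gain. A minimal sketch, assuming a component whose y axis grows downward; the names `horizonToBandGains` and `cloudsToFeedback` and the 20-cloud maximum are illustrative, not the actual voxworld code.

```cpp
#include <vector>
#include <cstddef>
#include <cassert> // for the quick self-checks below

// Map a drawn horizon curve (screen y per x pixel) to per-band linear
// EQ gains: the higher the horizon at a band's position, the more boost.
// Assumes numBands >= 2 and a non-empty curve.
std::vector<double> horizonToBandGains(const std::vector<double>& horizonY,
                                       std::size_t numBands,
                                       double componentHeight,
                                       double maxGain) {
    std::vector<double> gains(numBands, 0.0);
    for (std::size_t b = 0; b < numBands; ++b) {
        // Index into the drawn curve at this band's relative x position.
        std::size_t i = b * (horizonY.size() - 1) / (numBands - 1);
        // Screen y grows downward, so invert: top of component = maxGain.
        double h = 1.0 - horizonY[i] / componentHeight;
        gains[b] = h * maxGain;
    }
    return gains;
}

// Clouds -> delay feedback: more clouds, more feedback, capped below
// unity so the delay line stays stable.
double cloudsToFeedback(int numClouds, int cloudsForMax = 20) {
    double f = static_cast<double>(numClouds) / cloudsForMax;
    return f > 0.95 ? 0.95 : f;
}
```

Keeping the feedback cap below 1.0 is the one non-negotiable design choice here: without it, a sky full of clouds would make the echo effect blow up.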
<br />
Next I need to think about ways to remove the clouds and integrate a variety of other effects. I want to incorporate autotune, choruses, harmonizers, basic reverbs and delays, formant shifters, etc. It will be a challenge to design intuitive ways of toggling on/off these effects as well as changing their relative mixes and other internal parameters without the use of traditional UI elements (sliders, knobs, numbers). I will do my best to deliver fun, easy-to-use/understand, visually compelling, and powerful control paradigms in the UI, but will likely also provide a super-editor mode that allows users to fine-tune certain effect details. <br />
<br />
I have already noticed some performance issues when I add lots of dots. Currently I'm not using the GPU to draw, so my next step in this journey is to understand how to use [https://docs.juce.com/master/tutorial_open_gl_application.html OpenGL in a JUCE plugin] and learn how to program some shaders... fun fun!<br />
<br />
===Week 7===<br />
I have been working on a variety of vocal effects using LEAF. These include:<br />
<br />
# '''Autotune'''<br />
<br />
# '''Harmonizer'''<br />
<br />
# '''Formant Shifter'''</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=23089Mulshine:320C2021-05-05T21:53:23Z<p>Mulshine: /* Week 5 + 6 */</p>
<hr />
<div>=Music 320C: Software Projects in Music/Audio Signal Processing=<br />
==My Ever-Evolving 320C Project Idea==<br />
<br />
I want to make a quirky, visually elaborate, "all-in-one," "one-stop" Vocal FX plugin for singer-songwriters, rappers, and other vocalists. <br />
<br />
<br />
I want the UI to feature many bright colors, shapes, aesthetically/socially meaningful artifacts, and require unconventional (but fun and intuitive) means of interaction to control the Vocal FX chain. Ideally there will be no numbers, sliders, knobs, meters, or other traditional plugin UI elements in my plugin. This is inspired by my interests in developing unique new audiovisual interactive interfaces for music consumption/distribution, being a singer-songwriter myself, and needing to often work on vocal production. Check out [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5] for some modal and aesthetic reference. <br />
<br />
<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled in [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I often make web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and a few spectral delay effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the number and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more.<br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply parametric EQ (peak filter)''' to input signal with frequency between '''5-8 kHz''' and a '''very high Q (like 100+)'''. This will produce a signal that peaks really aggressively on sibilant "s" sounds, since "s" frequency content is packed into the 5-8 kHz range. <br />
# '''Compress original input signal with sibilant filtered signal as sidechain input.''' Because the sidechain input peaks aggressively on "s" sounds, the compressor will compress the original input only during those "s" sounds. The envelope of the compressor needs to be really quick to cut out "s" sounds as soon as they happen, and before they are perceptible. In experiments, I had to set the compressor's attack to approximately 0.15ms and release to 15-20ms to get smooth, immediately responsive compression. <br />
<br />
<br />
I'm not trying to build a whole plugin UI completely from scratch at this stage. Looking around for code to work from, I was lucky enough to find an [https://github.com/p-hlp/CTAGDRC open source dynamic range compressor JUCE plugin] by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project ([https://github.com/spiricom/LEAF/ LEAF]) into this plugin and was able to get the de-esser algorithm above to work, along with a couple extra knobs and buttons in the UI. <br />
<br />
<br />
This side project into de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.<br />
<br />
===Week 3 + 4===<br />
<br />
I spent a lot of time developing and fine-tuning my DeEsser plugin. <br />
<br />
I also spent some time implementing an autotune effect using the [https://github.com/spiricom/LEAF LEAF] library's tRetune class. I implemented a tuning system defined by a user-selectable root pitch and scale. <br />
<br />
I also started working through some JUCE animation tutorials.<br />
<br />
===Week 5 + 6===<br />
<br />
I decided to roll with the analogy of a landscape for my vocal FX plugin. <br />
<br />
So far I've implemented a couple features of this vocal FX landscape that I am (for now) calling "voxworld": <br />
<br />
# '''A basic EQ "horizon"''' in which a landscape's horizon can be drawn and redrawn to set the peaks of a constant-Q equalizer with 20 bands (so far). I exported the DSP source for mth_octave_filterbank_demo(2) to C++ and ported the code into a JUCE plugin setup to interface with the LEAF library. <br />
<br />
# '''Dots/"clouds"''' that appear, fade away, then reappear (continuously) when you click in the "sky." The more clouds, the higher the feedback gain in a delay line ("echo") effect. <br />
<br />
Next I need to think about ways to remove the clouds and integrate a variety of other effects. I want to incorporate autotune, choruses, harmonizers, basic reverbs and delays, formant shifters, etc. It will be a challenge to design intuitive ways of toggling on/off these effects as well as changing their relative mixes and other internal parameters without the use of traditional UI elements (sliders, knobs, numbers). I will do my best to deliver fun, easy-to-use/understand, visually compelling, and powerful control paradigms in the UI, but will likely also provide a super-editor mode that allows users to fine-tune certain effect details. <br />
<br />
I have already noticed some performance issues when I add lots of dots. Currently I'm not using the GPU to draw, so my next step in this journey is to understand how to use [https://docs.juce.com/master/tutorial_open_gl_application.html Open GL in a JUCE Plugin] and learn how to program some shaders... fun fun!</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=23088Mulshine:320C2021-05-05T21:51:08Z<p>Mulshine: </p>
<hr />
<div>=Music 320C: Software Projects in Music/Audio Signal Processing=<br />
==My Ever-Evolving 320C Project Idea==<br />
<br />
I want to make a quirky, visually elaborate, "all-in-one," "one-stop" Vocal FX plugin for singer-songwriters, rappers, and other vocalists. <br />
<br />
<br />
I want the UI to feature many bright colors, shapes, aesthetically/socially meaningful artifacts, and require unconventional (but fun and intuitive) means of interaction to control the Vocal FX chain. Ideally there will be no numbers, sliders, knobs, meters, or other traditional plugin UI elements in my plugin. This is inspired by my interests in developing unique new audiovisual interactive interfaces for music consumption/distribution, being a singer-songwriter myself, and needing to often work on vocal production. Check out [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5] for some modal and aesthetic reference. <br />
<br />
<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled in [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I often make web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and a few spectral delay effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the number and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more.<br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply parametric EQ (peak filter)''' to input signal with frequency between '''5-8 kHz''' and a '''very high Q (like 100+)'''. This will produce a signal that peaks really aggressively on sibilant "s" sounds, since "s" frequency content is packed into the 5-8 kHz range. <br />
# '''Compress original input signal with sibilant filtered signal as sidechain input.''' Because the sidechain input peaks aggressively on "s" sounds, the compressor will compress the original input only during those "s" sounds. The envelope of the compressor needs to be really quick to cut out "s" sounds as soon as they happen, and before they are perceptible. In experiments, I had to set the compressor's attack to approximately 0.15ms and release to 15-20ms to get smooth, immediately responsive compression. <br />
<br />
<br />
I'm not trying to build a whole plugin UI completely from scratch at this stage. Looking around for code to work from, I was lucky enough to find an [https://github.com/p-hlp/CTAGDRC open source dynamic range compressor JUCE plugin] by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project ([https://github.com/spiricom/LEAF/ LEAF]) into this plugin and was able to get the de-esser algorithm above to work, along with a couple extra knobs and buttons in the UI. <br />
<br />
<br />
This side project into de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.<br />
<br />
===Week 3 + 4===<br />
<br />
I spent a lot of time developing and fine-tuning my DeEsser plugin. <br />
<br />
I also spent some time implementing an autotune effect using the [https://github.com/spiricom/LEAF LEAF] library's tRetune class. I implemented a tuning system defined by a user-selectable root pitch and scale. <br />
<br />
I also started working through some JUCE animation tutorials.<br />
<br />
===Week 5 + 6===<br />
<br />
I decided to roll with the analogy of a landscape for my vocal FX plugin. <br />
<br />
So far I've implemented a couple features of this vocal FX landscape that I am (for now) calling "voxworld": <br />
<br />
# A basic EQ "horizon" in which a landscape's horizon can be drawn and redrawn to set the peaks of a constant-Q equalizer with 20 bands (so far). I exported the DSP source for mth_octave_filterbank_demo(2) to C++ and ported the code into a JUCE plugin setup to interface with the LEAF library. <br />
<br />
# Dots/"clouds" that appear, fade away, then reappear (continuously) when you click in the "sky." The more clouds, the higher the feedback gain in a delay line ("echo") effect. <br />
<br />
Next I need to think about ways to remove the clouds and integrate a variety of other effects. I want to incorporate autotune, choruses, harmonizers, basic reverbs and delays, formant shifters, etc. It will be a challenge to design intuitive ways of toggling on/off these effects as well as changing their relative mixes and other internal parameters without the use of traditional UI elements (sliders, knobs, numbers). I will do my best to deliver fun, easy-to-use/understand, visually compelling, and powerful control paradigms in the UI, but will likely also provide a super-editor mode that allows users to fine-tune certain effect details. <br />
<br />
I have already noticed some performance issues when I add lots of dots. Currently I'm not using the GPU to draw, so my next step in this journey is to understand how to use [https://docs.juce.com/master/tutorial_open_gl_application.html Open GL in a JUCE Plugin] and learn how to program some shaders... fun fun!</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=23080Mulshine:320C2021-05-03T21:28:37Z<p>Mulshine: </p>
<hr />
<div>=Music 320C: Software Projects in Music/Audio Signal Processing=<br />
==My Ever-Evolving 320C Project Idea==<br />
<br />
I want to make a quirky, visually elaborate, "all-in-one," "one-stop" Vocal FX plugin for singer-songwriters, rappers, and other vocalists. <br />
<br />
<br />
I want the UI to feature many bright colors, shapes, aesthetically/socially meaningful artifacts, and require unconventional (but fun and intuitive) means of interaction to control the Vocal FX chain. Ideally there will be no numbers, sliders, knobs, meters, or other traditional plugin UI elements in my plugin. This is inspired by my interests in developing unique new audiovisual interactive interfaces for music consumption/distribution, being a singer-songwriter myself, and needing to often work on vocal production. Check out [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5] for some modal and aesthetic reference. <br />
<br />
<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled in [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I often make web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and a few spectral delay effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the number and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more.<br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply parametric EQ (peak filter)''' to input signal with frequency between '''5-8 kHz''' and a '''very high Q (like 100+)'''. This will produce a signal that peaks really aggressively on sibilant "s" sounds, since "s" frequency content is packed into the 5-8 kHz range. <br />
# '''Compress original input signal with sibilant filtered signal as sidechain input.''' Because the sidechain input peaks aggressively on "s" sounds, the compressor will compress the original input only during those "s" sounds. The envelope of the compressor needs to be really quick to cut out "s" sounds as soon as they happen, and before they are perceptible. In experiments, I had to set the compressor's attack to approximately 0.15ms and release to 15-20ms to get smooth, immediately responsive compression. <br />
<br />
<br />
I'm not trying to build a whole plugin UI completely from scratch at this stage. Looking around for code to work from, I was lucky enough to find an [https://github.com/p-hlp/CTAGDRC open source dynamic range compressor JUCE plugin] by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project ([https://github.com/spiricom/LEAF/ LEAF]) into this plugin and was able to get the de-esser algorithm above to work, along with a couple extra knobs and buttons in the UI. <br />
<br />
<br />
This side project into de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.<br />
<br />
===Week 3 + 4===<br />
<br />
I spent a lot of time developing and fine-tuning my DeEsser plugin. <br />
<br />
I also spent some time implementing an autotune effect using the [https://github.com/spiricom/LEAF LEAF] library's tRetune class. I implemented a tuning system defined by a user-selectable root pitch and scale. <br />
<br />
I also started working through some JUCE animation tutorials.<br />
<br />
===Week 5===<br />
<br />
I decided to roll with the analogy of a landscape for my vocal FX plugin. The horizon will represent the EQ curves and dots/clouds in the sky will correspond to a varying delay effect.</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=23008Colloquium2021-04-26T23:02:37Z<p>Mulshine: /* Spring Quarter (2021) */</p>
<hr />
<div>'''Wednesday 5:30pm PT (Zoom)<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium traditionally happened every Wednesday during the academic year from 5:30 - 7:00pm and met in the CCRMA Classroom, Knoll 217. During the pandemic, the colloquium has been held consistently via Zoom.<br />
<br />
'''The colloquium team for 2020-2021 is:<br/><br />
<br />
Barbara Nerness - bnerness@ccrma.stanford.edu <br /><br />
Kunwoo Kim - kunwoo@ccrma.stanford.edu <br /><br />
Mike Mulshine - mrmulshine@ccrma.stanford.edu <br /><br />
Camille Noufi - cnoufi@ccrma.stanford.edu <br /><br />
<br /><br />
<br />
= Spring Quarter (2021)=<br />
<br />
* '''3/31: Town Hall<br />
* '''4/07: CCRMA Open House Prep<br />
* '''4/14: <br />
* '''4/21: [[12pm]] - CCRMA Colloquium Phase Shift -1.88 degrees (social hang)<br />
* '''4/28: [http://www.avneeshsarwate.com/ Avneesh Sarwate]:''' Digital Audiovisual Interactive Media <br />
* '''5/05: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: <br />
** Speaker 2: <br />
** Speaker 3: <br />
** Speaker 4: <br />
** Speaker 5: Ge: "ChucKTrip"<br />
** Speaker 6: <br />
** Speaker 7: <br />
** Speaker 8: <br />
** Speaker 9: <br />
** Speaker 10: <br />
** Speaker 11: <br />
** Speaker 12: <br />
** Speaker 13: <br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''5/12: [http://scattershot.org/ Jeff Snyder]:''' "Unusual Embedded Instruments"<br />
* '''5/19: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Marise van Zyl<br />
** Speaker 2: <br />
** Speaker 3: <br />
* '''5/26: [https://www.decontextualize.com/ Allison Parrish]:''' Poet and Programmer<br />
* '''6/02: [http://sashaleitman.com/about/ Sasha Leitman]:''' Physical Interaction Design for Music<br />
<br />
= Past - Winter Quarter (2021)=<br />
<br />
* 1/13: Break<br />
* '''1/20: Informal Hangout / Dance Party<br />
* '''1/27: <br />
* '''2/03: <br />
* '''2/10: CCRMA Town Hall!! <br />
*'''2/17: Rapid-Fire Talks''' (5 min) - sign up here via your CCRMA login <br />
** Speaker 1: Kunwoo Kim<br />
** Speaker 2: John Chowning<br />
** Speaker 3: Noah Fram<br />
** Speaker 4: Camille Noufi<br />
** Speaker 5: Barbara Nerness<br />
** Speaker 6: (maybe) Julie Zhu<br />
** Speaker 7: Chris Chafe<br />
** Speaker 8: Lloyd May<br />
** Speaker 9: Mike Mulshine<br />
** Speaker 10: Ge Wang<br />
** Speaker 11: Jatin (hopefully)<br />
** Speaker 12: Alex Chechile<br />
** Speaker 13: Fernando Lopez-Lezcano<br />
** Speaker 14:<br />
** Speaker 15:<br />
* '''2/24:<br />
* '''3/03: Conference Style Talks''' (15-20 min) - sign up here via your CCRMA login<br />
** Speaker 1: Ty Sadlier<br />
** Speaker 2: Travis Skare<br />
** Speaker 3: Constantin Basica & Prateek Verma<br />
** Speaker 4: <br />
* '''3/10: Sasha Leitman<br />
* '''3/17: Break<br />
<br />
= Past - Autumn Quarter (2020)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.<br />
<br />
*'''9/16 New Student Introductions'''<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10: Austin Zambito-Valente<br />
<br />
*'''9/23 Faculty/Staff Introductions'''<br />
**Speaker 1: Jonathan Berger<br />
** Speaker 2: Ge Wang<br />
** Speaker 3: Takako Fujioka<br />
** Speaker 4: Seán O Dalaigh (new DMA)<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7: Blair Kaneshiro<br />
<br />
*'''9/30 Faculty/Staff Introductions'''<br />
** Speaker 1: Patricia Alessandrini (via video)<br />
** Speaker 2: Julius Smith<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5: Stephanie Sherriff<br />
** Speaker 6: Constantin Basica<br />
** Speaker 7: Matt Wright<br />
** Speaker 8: Chris Chafe<br />
<br />
*10/7 - Break<br />
<br />
*'''10/14 - Town Hall'''<br />
<br />
*'''10/21 - Adjunct Faculty Talks'''<br />
** Speaker 1: Malcolm Slaney<br />
** Speaker 2: Poppy Crum<br />
** Speaker 3: Paul Demarinis<br />
** Speaker 4: Jonathan Abel<br />
** Speaker 5: Doug James<br />
<br />
*11/4 - Break<br />
<br />
*'''11/11 - [https://www.justinsalamon.com/ Justin Salamon (Adobe / NYU)] [https://vimeo.com/480670893 (Watch Again)]'''<br />
<br />
*'''11/18 - Mona Shahnavaz'''<br />
<br />
ABSTRACT & BIO:<br />
Mona is an enthusiastic musician, whose focus and passion has been to<br />
share the joy of music with others. In 2018, a successful outcome of<br />
her innovative music program designed for senior citizens was the<br />
turning point for her to decide to change the course of learning piano<br />
in a less complex route. Her engineering background helped her to<br />
start working on the idea that bridges the gap between music and<br />
technology.<br />
<br />
The approach to fingering in music has always been and still is one of<br />
the major elements of success for keyboard players. Correct fingering<br />
assists the performer in delivering a better technical and musical<br />
performance. This research presents the best technique to generate<br />
fingering for any sequence of music notes. Dynamic programming and<br />
mathematics are major parts of this paper, they work alongside rules<br />
set by pianists to calculate the most practical fingerings for any<br />
musical passage.<br />
<br />
The ultimate goal is to facilitate the process of playing the piano
using an AR platform. This helps music instruction scale and allows
for efficient teaching; solving this problem would make virtual
instruction more productive and impactful. If successful in the AR
field, this research could also be applied to robotic tasks in
educational programs, video games, and medical fields.<br />
<br />
*11/25 - THANKSGIVING WEEK - Break</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=220a-spring-2021/hw1&diff=22976220a-spring-2021/hw12021-04-18T17:01:24Z<p>Mulshine: /* Instructions */</p>
<hr />
<div>= Homework #1: Hand-Crafted Digital Audio =<br />
<br />
[[File:Hand-craft-weave.jpg|512px|a woman weaves a colorful pattern using a machine]]<br />
<br />
=== Due Date ===<br />
* 2021.4.13 11:59:59pm, Tuesday<br />
<br />
<br />
=== Instructions ===<br />
'''(Part 1)''' Generate several signals in ChucK: impulse, noise, sine, triangle, and square waves (with modulating pulse widths), with varying envelopes (ADSR etc.), with and without a low-pass filter (LPF). Bring these into Audacity as WAV files and look at their waveforms (time domain) and spectrograms (time/frequency domain).<br />
<br />
a. (1a-noise.ck) Write a ChucK program that continuously generates white noise (Noise). Stop the program manually after a finite period of time.<br />
<br />
b. (1b-adsr.ck) Same thing, but apply an amplitude envelope (ADSR) with the following parameters: attack=10ms, decay=40ms, sustain=.5, release=100ms. Do the following in an infinite loop { start the envelope with .keyOn(), then 2::seconds later stop it with .keyOff(), wait for another 2::seconds }. Stop the program manually after a finite period of time.<br />
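<br />
A minimal sketch of this keyOn()/keyOff() loop (one possible shape, not the only solution; names are placeholders):<br />
<nowiki>Noise n => ADSR e => dac;<br />
// attack, decay, sustain level, release<br />
e.set( 10::ms, 40::ms, .5, 100::ms );<br />
while( true )<br />
{<br />
    // start the envelope<br />
    e.keyOn();<br />
    2::second => now;<br />
    // begin the release<br />
    e.keyOff();<br />
    2::second => now;<br />
}</nowiki><br />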
<br />
c. (1c-noise-filtersweep.ck) Write another ChucK program that is the same as (b), but with an LPF whose cutoff frequency sweeps up and down between 100 and 800 Hz, using a loop that updates the LPF&#8217;s .freq parameter every 10::ms. Hints: you could use Math.sin() to compute the time-varying cutoff frequencies, but your program can sweep the cutoff frequency with any shape and timing you desire. To perform a smooth sweep between LPF freq values, you may find it useful to either create a while/for loop OR use the Envelope class. If you use the Envelope class, you may need to spork the function that updates the frequency of the LPF.<br />
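<br />
One way to sketch the Math.sin() sweep (a hedged example; the mapping and sweep rate here are arbitrary choices):<br />
<nowiki>Noise n => ADSR e => LPF f => dac;<br />
e.set( 10::ms, 40::ms, .5, 100::ms );<br />
e.keyOn();<br />
while( true )<br />
{<br />
    // map sin's [-1,1] range onto [100,800] Hz<br />
    450 + 350 * Math.sin( 2 * pi * (now/second) ) => f.freq;<br />
    10::ms => now;<br />
}</nowiki><br />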
<br />
d. (1d-sine-sweep.ck) Write a ChucK program that generates a sine wave (SinOsc) that sweeps its frequency smoothly from 30 to 3000 Hz over 3 seconds, updating at a time interval of 10::ms.<br />
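<br />
For example, a linear sweep in 300 steps of 10::ms each (one possible shape; the sweep need not be linear):<br />
<nowiki>SinOsc s => dac;<br />
30. => float freq;<br />
// 3 seconds / 10 ms per step = 300 steps<br />
( 3000. - 30. ) / 300. => float step;<br />
for( 0 => int i; i < 300; i++ )<br />
{<br />
    freq => s.freq;<br />
    step +=> freq;<br />
    10::ms => now;<br />
}</nowiki><br />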
<br />
e. (1e-sine-sweep+LPF.ck) Write another ChucK program that is the same as above, but with an LPF whose cutoff frequency is 500 Hz.<br />
<br />
f. (1f-sqrosc.ck) Next write another ChucK program, same as (e), but with a square wave.<br />
<br />
g. (1g-sqrosc-sweep.ck) Next write another ChucK program, like (f), but with the square wave’s frequency constant at 220 Hertz and the LPF frequency sweeping to your taste, along the lines of (c).<br />
<br />
h. (1h-q.ck) Same as (g) but increase the LPF&#8217;s .Q parameter to at least 3.<br />
<br />
i. Using miniAudicle&#8217;s File->Export feature OR the recording program (rec.ck or rec-auto.ck), record a sound file from each of your eight programs (a) through (h).<br />
<br />
j. Open each sound file in Audacity and look at their waveforms (time domain) and spectrograms (time/freq domain).<br />
<br />
k. Reflect on what you see: how do these visual representations correspond or not to “what you hear”? <br />
<br />
l. Take a screenshot of the one waveform or spectrogram you find most interesting or illuminating. Mention something about it in your written reflection.<br />
<br />
m. On your HW1 webpage, include your code from (a)-(h), and your screenshot + reflections from (k) and (l).<br />
<br />
'''(Part 2)''' Open a digital audio file that’s meaningful to you (e.g., voice mail from a friend/relative) in Audacity and Paulstretch it however you like. Save the output to a wav file.<br />
<br />
'''(Part 3)''' Make a digital sample &#8220;by hand&#8221;, writing each successive value (a number between -1.0 and +1.0) in an array. You must put between 25 and 250000 numbers into the array (&#8220;your sample must contain at least 25 samples&#8221;). (Hint: shorter sounds will be less work.) Record your sample to a wav file. (Hint: you might open the wav file in Audacity and prepare it: make sure it isn&#8217;t clipping, trim out extra silence, and boost the volume.)<br />
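<br />
One way to audition a hand-written array is to play one value per sample through a Step ugen (the array here is a tiny arbitrary example, far below the required length):<br />
<nowiki>[ 0., .5, 1., .5, 0., -.5, -1., -.5 ] @=> float mySample[];<br />
Step s => dac;<br />
for( 0 => int i; i < mySample.size(); i++ )<br />
{<br />
    // output this value for exactly one sample<br />
    mySample[i] => s.next;<br />
    1::samp => now;<br />
}</nowiki><br />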
<br />
'''(Part 4)''' Write a ChucK program to load your generated sample from (3) into a SndBuf and play it at varying amplitudes, rates (=pitch+timbre shift), and timings.<br />
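<br />
A minimal sketch for Part 4 (the filename is a placeholder for your Part 3 recording; the parameter ranges are arbitrary):<br />
<nowiki>SndBuf buf => dac;<br />
// load the sample made in Part 3<br />
me.dir() + "mysample.wav" => buf.read;<br />
while( true )<br />
{<br />
    // rewind, then vary amplitude, rate, and timing<br />
    0 => buf.pos;<br />
    Math.random2f( .2, .8 ) => buf.gain;<br />
    Math.random2f( .5, 2. ) => buf.rate;<br />
    Math.random2f( 100, 500 )::ms => now;<br />
}</nowiki><br />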
<br />
'''(Part 5)''' Craft a short musical statement of 30-90 seconds, combining elements from parts 1, 2, and 3/4.<br />
<br />
=== Deliverables ===<br />
'''turn in all files by putting them in your 220a CCRMA webpage and submit ONLY your webpage URL to Canvas'''<br />
<br />
Your webpage should include:<br />
* 1) your hw1 should live at https://ccrma.stanford.edu/~YOURID/220a/hw1<br />
* 2) ChucK (.ck) files, as applicable, for Parts 1 through 5<br />
* 3) sound (.wav) files, as applicable, for Parts 1 through 5<br />
* 4) comments and reflections as you work through the homework<br />
* 5) notes/title for your short musical statement (Part 5)<br />
* 6) submit ONLY your webpage URL to Canvas</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=220a-spring-2021/hw2&diff=22962220a-spring-2021/hw22021-04-17T19:10:39Z<p>Mulshine: /* Part 2: Sporks, Shreds, and a Sea of Sound */</p>
<hr />
<div>= Homework #2: Block-Rockin' Synths =<br />
<br />
[[File:SynthMadness.jpg|512px|a hand-drawn sketch of an unsettling scene of music synthesis]]<br />
<br />
=== Due Date ===<br />
* milestone: 2021.4.21 (in-class) Wednesday<br />
* final deliverables due: 2021.4.26, 11:59pm, Monday<br />
* in-class listening: 2021.4.28, Wednesday<br />
<br />
<br />
=== Part 1: Crafting a Sound ===<br />
* 1a. Create a pitched sound using an oscillator, a filter, and an envelope<br />
** Oscillator: choose among TriOsc, SawOsc, or SqrOsc. (As discussed in class, the SinOsc is not amenable to filtering because sine waves have no overtones; the filter cannot change the "timbre" of the SinOsc.)<br />
** Filter: hook up your oscillator to an LPF (low pass filter) ugen and experiment with setting the filter's cutoff frequency using the .freq parameter. (Again, an LPF will have a more noticeable effect on signals rich in frequencies (e.g., SqrOsc) than on, say, a sine wave.)<br />
** Next, use an ADSR to envelope the signal coming out of the LPF. (see [http://chuck.stanford.edu/doc/examples/basic/adsr.ck adsr.ck])<br />
*** Experiment with envelope parameters (attack/decay and sustain level) to create a shorter ([https://en.wikipedia.org/wiki/Staccato staccato]) "percussive" sound<br />
*** Experiment with envelope parameters (attack/decay and sustain level) to create a longer ([https://en.wikipedia.org/wiki/Legato legato]) sound (e.g., a swell)<br />
(1a-voice.ck) Turn in ChucK code that plays the shorter sound followed by the longer sound.<br />
<br />
* 1b. Create a “playNote()” function to encapsulate the function of playing a note<br />
** Adapt the sample code below into a [http://chuck.stanford.edu/doc/language/func.html ChucK function] that takes 3 or more input arguments that control the sound created in 1a; parameters should include the oscillator frequency, the amplitude (gain, related to loudness), and the duration of the note; feel free to further modify this function to your liking (e.g., do you also want to control the filter cutoff frequency with each note?).<br />
** Play 4 different kinds of sounds by calling this function with different inputs.<br />
<nowiki>// play a note (assumes "osc" and "e" are globals)<br />
fun void playNote( float pitch, float amp, dur T )<br />
{<br />
// set freq (osc is your oscillator)<br />
pitch => Std.mtof => osc.freq;<br />
// set amplitude<br />
amp => osc.gain;<br />
// open env (e is your envelope)<br />
e.keyOn();<br />
// A through end of S<br />
T-e.releaseTime() => now;<br />
// close env<br />
e.keyOff();<br />
// release<br />
e.releaseTime() => now;<br />
}</nowiki><br />
<br />
(1b-play.ck) Turn in ChucK code including the definition of your function and a section that repeatedly calls your function.<br />
<br />
* 1c. Make it polyphonic<br />
** Convert your single oscillator into an array of 4 oscillators<br />
** Consider using a for-loop to connect the oscillators to the rest of the signal path (including filters, envelopes, and dac)<br />
** Write another Chuck function to set each oscillator to a different frequency to create a [https://en.wikipedia.org/wiki/Chord_(music) chord] of your choosing<br />
<br />
(1c-chord.ck) Turn in ChucK code that uses all the above to play a single chord of your choosing.<br />
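<br />
Hooking up the oscillator array might look like this (a sketch; your oscillator type, gains, and signal path may differ):<br />
<nowiki>SawOsc osc[4];<br />
LPF f => ADSR e => dac;<br />
for( 0 => int i; i < 4; i++ )<br />
{<br />
    osc[i] => f;<br />
    // keep the summed gain under control<br />
    .1 => osc[i].gain;<br />
}</nowiki><br />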
<br />
By the way, here is some starter code showing one technique for representing chords:<br />
<br />
// chord root in MIDI note number; 60 is Middle C<br />
60 => int root;<br />
// array of intervals relative to the root; this is a major seventh chord with a "just" major third<br />
[0.,3.863,7.,11.] @=> float chord[];<br />
<br />
// print out the MIDI note numbers and frequencies for the chord<br />
for( int i; i < chord.size(); i++ )<br />
{<br />
// print MIDI note, frequency<br />
<<< root+chord[i], Std.mtof(root+chord[i]) >>>;<br />
}<br />
<br />
* 1d. Design a new function "playChord()" -- like 1b but to play an entire chord instead of a single note.<br />
** How would your playChord() look? What parameters should it accept? (e.g., root of the chord and intervals?)<br />
** However you design the function's interface, the function should set the respective frequencies on your oscillators, and sound the chord.<br />
(1d-chords.ck) Use the above function to play a sequence of four different chords of your choosing. Additionally, vary at least one other musical element from chord to chord (e.g., relative loudness, duration).<br />
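<br />
One possible interface, assuming the oscillator array from 1c and a shared envelope (the names "osc" and "e" are placeholders, mirroring the playNote() example from 1b):<br />
<nowiki>// play a chord given a root, intervals, amplitude, and duration<br />
fun void playChord( int root, float chord[], float amp, dur T )<br />
{<br />
    for( 0 => int i; i < chord.size(); i++ )<br />
    {<br />
        Std.mtof( root + chord[i] ) => osc[i].freq;<br />
        amp => osc[i].gain;<br />
    }<br />
    e.keyOn();<br />
    T - e.releaseTime() => now;<br />
    e.keyOff();<br />
    e.releaseTime() => now;<br />
}</nowiki><br />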
<br />
* 1e. Make 2 or 3 changes to further fine-tune your instrument to your liking. Possible modifications include:<br />
** Change the oscillator type to something you haven’t used before<br />
** Modulate the filter cutoff independently (by spork ~ a concurrent function), perhaps sweeping it using Math.sin()<br />
** Re-tune the chord(s) using floating point (rather than integer) MIDI note numbers<br />
** Add reverb (NRev, PCRev, JRev), a delay w/ feedback, or another effect<br />
** Or something else!<br />
(1e-more.ck) A more-to-your-liking version of 1d-chords.ck incorporating the fine-tunings you've made (use your original chord sequence or feel free to change it up)<br />
<br />
* 1f. (1f-chord-stmt.ck) Craft a mini musical statement (30-45 seconds) by calling your control function multiple times across time, with different input parameters. Feel free to experiment; how might you make it sound more "musical" to your ears?<br />
<br />
=== Part 2: Sporks, Shreds, and a Sea of Sound ===<br />
In the previous Part, we constructed and then controlled sounds with several oscillators in the main thread of ChucK. Alternatively, we could write functions that create new UGens "on-the-fly" to play dynamically when sporked; these functions can be called repeatedly to layer the sounds in order to create polyphony (a mixture of several simultaneous voices).<br />
<br />
* 2a. Write a function makeSound() that makes a sound, encapsulating all the UGens and variables it would need<br />
This function can use any combination of oscillators, filters, and envelopes; you'll need to develop a clear idea of what UGens you need to create "locally" inside the function for each sound -- and what UGens are "globally" shared. Call this function (using spork) from your main while loop. <br />
Note: this could be as simple as wrapping the code you wrote in part 1a in its own function.<br />
<br />
Feel free to start from the code and modify:<br />
<br />
<nowiki> // globally shared ugens<br />
NRev reverb => dac;<br />
.1 => reverb.mix;<br />
<br />
// function<br />
fun void makeSound()<br />
{<br />
// ugens "local" to the function<br />
TriOsc s;<br />
// connect to "global" ugens<br />
s => reverb;<br />
<br />
// randomize frequency<br />
Math.random2f(30,1000) => s.freq;<br />
// randomize duration<br />
Math.random2f(50,1500)::ms => now;<br />
}<br />
<br />
while( true )<br />
{<br />
// spork a new concurrent shred<br />
spork ~ makeSound();<br />
// advance time<br />
300::ms => now;<br />
}</nowiki><br />
<br />
* 2b. Parameterize the makeSound() function so we can control it! Include:<br />
** oscillator pitch<br />
** oscillator amplitude / note velocity<br />
** filter cutoff<br />
** envelope parameters<br />
<br />
for example (your parameters can vary): <br />
<br />
fun void makeSound( float pitch, float vel, float cutoff, dur attack, dur decay, float sustain, dur release )<br />
{<br />
// set the parameters<br />
// make sound happen<br />
 // FYI: no infinite loops in this function; we will be calling this function repeatedly<br />
}<br />
<br />
<br />
* 2c. Now, spork the function several times (back to back without passing time) from your program, something like:<br />
<br />
spork ~ makeSound( 60, .5, 500, 50::ms, 50::ms, .5, 100::ms );<br />
spork ~ makeSound( 64, .5, 500, 50::ms, 50::ms, .5, 100::ms ); <br />
spork ~ makeSound( 67, .5, 500, 50::ms, 50::ms, .5, 100::ms ); <br />
<br />
Experiment with different input parameters, considering the sound as whole; try to craft a few different sounds.<br />
<br />
* 2d. Create a texture by sporking a series of makeSound() shreds across time that partially overlap. What kind of sounds work well when layered? Can you control the density by varying either the time between spork ~ makeSound() and the parameters to each makeSound() call -- or both? You might even consider having a global float variable named "density" that controls the density of sound at any given moment -- you could even modulate the "density" on yet another control shred. Can you controllably go from a sparse texture to a super-dense "sea of sound"?<br />
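<br />
A self-contained sketch of one density strategy: spork short sounds in a loop while a second shred slowly modulates how often they occur (all parameter ranges here are arbitrary examples):<br />
<nowiki>NRev reverb => dac;<br />
.1 => reverb.mix;<br />
// shreds sporked per second, modulated below<br />
1. => float density;<br />
<br />
fun void makeSound( float pitch, dur T )<br />
{<br />
    TriOsc s => ADSR e => reverb;<br />
    .2 => s.gain;<br />
    Std.mtof( pitch ) => s.freq;<br />
    e.set( 10::ms, 50::ms, .5, 200::ms );<br />
    e.keyOn();<br />
    T => now;<br />
    e.keyOff();<br />
    e.releaseTime() => now;<br />
}<br />
<br />
fun void modulate()<br />
{<br />
    while( true )<br />
    {<br />
        // ramp density between ~1 and ~20 shreds per second<br />
        10.5 + 9.5 * Math.sin( .1 * 2 * pi * (now/second) ) => density;<br />
        100::ms => now;<br />
    }<br />
}<br />
<br />
spork ~ modulate();<br />
while( true )<br />
{<br />
    spork ~ makeSound( Math.random2f( 48, 84 ), Math.random2f( 50, 500 )::ms );<br />
    // higher density means less time between sporks<br />
    (1.0/density)::second => now;<br />
}</nowiki><br />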
<br />
* 2e. Play with your functions and parameters to create one sparse (sparse.wav) texture and one (dense.wav) dense texture, each lasting for 10-15 seconds and then going to silence in a musically graceful way (e.g., finishing each note currently sounding). Record a wav file of each and comment on the different strategies you used to create them.<br />
<br />
=== Part 3: Make a Statement ===<br />
Create a musical statement (60-90 seconds) that calls functions from parts 1 and 2. Include at least one long and one short sound from part 1, as well as a moment of sparsity and density as explored in part 2. You can think of this as a sequencer or generative music tool.<br />
<br />
You may find the following creative prompts helpful:<br />
* How can you transition between sparsity and density? Would you like it to be abrupt (perhaps to create contrast) or gradual, smooth, or imperceptible? <br />
* How do textures created with short sounds differ from those with long sounds? How about layering textures of differing density? <br />
* How can you vary filter parameters over time to give your sounds/textures a different feel?<br />
* Think of rhythm and timing - to what degree are the time intervals between your sounds regular and predictable? Does this vary over time or for different kinds of sounds? Are some sounds “structural” and others “decorative”? <br />
* Think of form - does your statement have a beginning, middle, and/or end? Is there a “story” or an idea that develops? Could it work with half the duration? What makes the listener want to know or be able to anticipate what comes next?<br />
<br />
=== Milestone ===<br />
* For this milestone, we are primarily interested in a work-in-progress version of your musical statement (Part 3)—and that will be all you are expected to have on your website at this point. However, feel free to include anything on your webpage from Parts 1 and 2 if that's helpful to talk about your explorations and thinking for this milestone.<br />
* Please be prepared to share your work-in-progress and offer feedback to others in class on Wednesday (4/21)<br />
<br />
<br />
=== Final Homework Deliverables ===<br />
'''turn in all files by putting them in your 220a CCRMA webpage and submit ONLY your webpage URL to Canvas'''<br />
<br />
Your webpage should include:<br />
* 1) your hw2 should live at https://ccrma.stanford.edu/~YOURID/220a/hw2<br />
* 2) ChucK (.ck) files, as applicable, for Parts 1 through 3<br />
* 3) sound (.wav) files, as applicable, for Parts 1 through 3<br />
* 4) comments and reflections as you work through the homework<br />
* 5) notes/title for your musical statement (Part 3)<br />
* 6) submit ONLY your webpage URL to Canvas</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22872Mulshine:320C2021-04-13T00:38:22Z<p>Mulshine: /* Week 2 */</p>
<hr />
<div>=Music 320C: Software Projects in Music/Audio Signal Processing=<br />
==My Ever-Evolving 320C Project Idea==<br />
<br />
I want to make a quirky, visually elaborate, "all-in-one," "one-stop" Vocal FX plugin for singer-songwriters, rappers, and other vocalists. <br />
<br />
<br />
I want the UI to feature many bright colors, shapes, aesthetically/socially meaningful artifacts, and require unconventional (but fun and intuitive) means of interaction to control the Vocal FX chain. Ideally there will be no numbers, sliders, knobs, meters, or other traditional plugin UI elements in my plugin. This is inspired by my interests in developing unique new audiovisual interactive interfaces for music consumption/distribution, being a singer-songwriter myself, and needing to often work on vocal production. Check out [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5] for some modal and aesthetic reference. <br />
<br />
<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled in [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I often make web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and a few spectral delay effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more.<br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply parametric EQ (peak filter)''' to input signal with frequency between '''5-8 kHz''' and a '''very high Q (like 100+)'''. This will produce a signal that peaks really aggressively on sibilant "s" sounds, since "s" frequency content is packed into the 5-8 kHz range. <br />
# '''Compress original input signal with sibilant filtered signal as sidechain input.''' Because the sidechain input peaks aggressively on "s" sounds, the compressor will compress the original input only during those "s" sounds. The envelope of the compressor needs to be really quick to cut out "s" sounds as soon as they happen, and before they are perceptible. In experiments, I had to set the compressor's attack to approximately 0.15ms and release to 15-20ms to get smooth, immediately responsive compression. <br />
<br />
<br />
I'm not trying to build a whole plugin UI completely from scratch at this stage. Looking around for code to work from, I was lucky enough to find an [https://github.com/p-hlp/CTAGDRC open source dynamic range compressor JUCE plugin] by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project ([https://github.com/spiricom/LEAF/ LEAF]) into this plugin and was able to get the de-esser algorithm above to work, along with a couple extra knobs and buttons in the UI. <br />
<br />
<br />
This side project into de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.<br />
<br />
===Week 3===<br />
<br />
====Goals====<br />
# Explore animation (demos/tutorials) with JUCE. 2D and 3D<br />
# Map some mouse and keystroke interactions to animation and audio processing parameters in an arbitrary (for now) FX chain<br />
# Test how these basic plugins work in various DAWs... I know different DAWs handle different keystrokes and mouse events differently.</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22871Mulshine:320C2021-04-13T00:37:51Z<p>Mulshine: /* Week 2 */</p>
<hr />
<div>=Music 320C: Software Projects in Music/Audio Signal Processing=<br />
==My Ever-Evolving 320C Project Idea==<br />
<br />
I want to make a quirky, visually elaborate, "all-in-one," "one-stop" Vocal FX plugin for singer-songwriters, rappers, and other vocalists. <br />
<br />
<br />
I want the UI to feature many bright colors, shapes, aesthetically/socially meaningful artifacts, and require unconventional (but fun and intuitive) means of interaction to control the Vocal FX chain. Ideally there will be no numbers, sliders, knobs, meters, or other traditional plugin UI elements in my plugin. This is inspired by my interests in developing unique new audiovisual interactive interfaces for music consumption/distribution, being a singer-songwriter myself, and needing to often work on vocal production. Check out [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5] for some modal and aesthetic reference. <br />
<br />
<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I often make web-based audiovisual interactive media that straddles the line between passive listening and active gaming, (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot of on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and a few spectral delay effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more.<br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply a parametric EQ (peak filter)''' to the input signal with a center frequency between '''5-8 kHz''' and a '''very high Q (100+)'''. This produces a signal that peaks aggressively on sibilant "s" sounds, since "s" frequency content is packed into the 5-8 kHz range. <br />
# '''Compress original input signal with sibilant filtered signal as sidechain input.''' Because the sidechain input peaks aggressively on "s" sounds, the compressor will compress the original input only during those "s" sounds. The envelope of the compressor needs to be really quick to cut out "s" sounds as soon as they happen, and before they are perceptible. In experiments, I had to set the compressor's attack to approximately 0.15ms and release to 15-20ms to get smooth but effective compression. <br />
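The two steps above can be sketched as a sidechain-driven gain computer (the structure and names below are my own, not from the article): it takes the bandpass-filtered sidechain sample, tracks its envelope with the ~0.15 ms attack / ~15 ms release mentioned above, and returns a linear gain to apply to the unprocessed vocal.

```cpp
#include <cmath>

// Hedged sketch of the de-esser gain stage. Attack/release use one-pole
// smoothing y = coef*y + (1-coef)*x with coef = exp(-1 / (tau_seconds * fs)).
// Threshold and ratio values here are placeholders, not from the article.
struct DeEsserGain
{
    float attackCoef, releaseCoef;
    float env = 0.0f;          // envelope of the sidechain (linear amplitude)
    float thresholdDb, ratio;

    DeEsserGain(float sampleRate, float attackMs = 0.15f, float releaseMs = 15.0f,
                float threshDb = -30.0f, float compRatio = 8.0f)
        : attackCoef(std::exp(-1.0f / (attackMs * 0.001f * sampleRate))),
          releaseCoef(std::exp(-1.0f / (releaseMs * 0.001f * sampleRate))),
          thresholdDb(threshDb), ratio(compRatio) {}

    // sidechainSample: output of the narrow 5-8 kHz peak filter (step 1).
    // Returns the linear gain to apply to the original, unfiltered vocal.
    float nextGain(float sidechainSample)
    {
        float level = std::fabs(sidechainSample);
        float coef  = level > env ? attackCoef : releaseCoef;
        env = coef * env + (1.0f - coef) * level;

        float envDb = 20.0f * std::log10(env + 1.0e-9f);
        if (envDb <= thresholdDb)
            return 1.0f;                     // below threshold: leave vocal alone
        float reducedDb = thresholdDb + (envDb - thresholdDb) / ratio;
        return std::pow(10.0f, (reducedDb - envDb) / 20.0f); // gain < 1 on "s"
    }
};
```

Because the sidechain is nearly silent except during sibilance, the gain sits at unity for normal vocal content and only dips during "s" sounds, which is the whole trick.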
<br />
<br />
I'm not trying to build a whole plugin UI completely from scratch at this stage. Looking around for code to work from, I was lucky enough to find an [https://github.com/p-hlp/CTAGDRC open source dynamic range compressor JUCE plugin] by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project ([https://github.com/spiricom/LEAF/ LEAF]) into this plugin and was able to get the de-esser algorithm above working, along with a couple of extra knobs and buttons in the UI. <br />
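The LEAF/STK code itself isn't reproduced here, but the filter in question belongs to the family of Chamberlin-style digital state-variable filters, which produce lowpass, bandpass, and highpass outputs in a single tick. A generic, simplified sketch (my own, not the ported code):

```cpp
#include <cmath>

// Generic Chamberlin digital state-variable filter. One tick yields lowpass,
// bandpass, and highpass simultaneously; the narrow bandpass output is what a
// de-esser sidechain wants. Stable for cutoffs well below sampleRate / 6.
struct StateVariableFilter
{
    float f = 0.0f, q = 1.0f;       // frequency and damping coefficients
    float low = 0.0f, band = 0.0f;  // filter state
    float high = 0.0f;

    void setParams(float cutoffHz, float Q, float sampleRate)
    {
        constexpr float kPi = 3.14159265358979f;
        f = 2.0f * std::sin(kPi * cutoffHz / sampleRate);
        q = 1.0f / Q;               // high Q -> low damping -> narrow bandpass
    }

    float tickBandpass(float input)
    {
        low  += f * band;
        high  = input - low - q * band;
        band += f * high;
        return band;
    }
};
```

With the cutoff parked in the 5-8 kHz sibilance region and a high Q, `tickBandpass` gives exactly the aggressive "s"-only signal that feeds the compressor's sidechain.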
<br />
<br />
This side project into de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.<br />
<br />
===Week 3===<br />
<br />
====Goals====<br />
# Explore 2D and 3D animation (demos/tutorials) with JUCE<br />
# Map some mouse and keystroke interactions to animation and audio processing parameters in an arbitrary (for now) FX chain<br />
# Test how these basic plugins work in various DAWs... I know different DAWs handle different keystrokes and mouse events differently.</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22859Mulshine:320C2021-04-12T22:37:59Z<p>Mulshine: /* Week 3 */</p>
<hr />
<div>=Music 320C: Software Projects in Music/Audio Signal Processing=<br />
==My Ever-Evolving 320C Project Idea==<br />
<br />
I want to make a quirky, visually elaborate, "all-in-one," "one-stop" Vocal FX plugin for singer-songwriters, rappers, and other vocalists. <br />
<br />
<br />
I want the UI to feature many bright colors, shapes, aesthetically/socially meaningful artifacts, and require unconventional (but fun and intuitive) means of interaction to control the Vocal FX chain. Ideally there will be no numbers, sliders, knobs, meters, or other traditional plugin UI elements in my plugin. This is inspired by my interests in developing unique new audiovisual interactive interfaces for music consumption/distribution, being a singer-songwriter myself, and needing to often work on vocal production. Check out [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5] for some modal and aesthetic reference. <br />
<br />
<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled in [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and an already packed quarter schedule compelled me to drop 448Z (sorry, Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still wanted to make a JUCE plugin that combined audio and visuals. I gravitate toward web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot on making my vocals sound good (read: hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom vocal EQ and a few spectral delay vocal effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or, I suppose, other instruments too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
<br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in Xcode, reminded myself how to debug plugins in my DAW via Xcode, and more. <br />
<br />
<br />
I've recently been interested in developing a de-esser plugin, so I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] lays out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and FX chains. This is the algorithm it outlines:<br />
# '''Apply a parametric EQ (peak filter)''' to the input signal with a center frequency between '''5–8 kHz''' and a '''very high Q (100+)'''. This produces a signal that peaks aggressively on sibilant "s" sounds, since "s" frequency content is packed into the 5–8 kHz range. <br />
# '''Compress the original input signal with the sibilant-filtered signal as the sidechain input.''' Because the sidechain input peaks aggressively on "s" sounds, the compressor compresses the original input only during those "s" sounds. The compressor's envelope needs to be very fast to cut out "s" sounds as soon as they happen, before they are perceptible. In my experiments, I had to set the compressor's attack to approximately 0.15 ms and release to 15–20 ms to get smooth but effective compression. <br />
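The two steps above can be sketched offline in Python. This is a hypothetical illustration, not the plugin's code: the `threshold` and `ratio` values are placeholder assumptions, and a simple one-pole peak follower stands in for a full compressor envelope. The `sidechain` argument is assumed to be the 5–8 kHz peak-filtered copy of the input from step 1:<br />

```python
import math

def envelope_follower(x, sr, attack_ms=0.15, release_ms=15.0):
    """One-pole peak envelope with a very fast attack and slower release,
    matching the attack/release settings found above."""
    atk = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, []
    for s in x:
        level = abs(s)
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out

def de_ess(signal, sidechain, sr, threshold=0.1, ratio=4.0):
    """Step 2: compress `signal` wherever the band-passed `sidechain`
    (step 1's 5-8 kHz peak-filtered copy) exceeds `threshold`."""
    out = []
    for s, env in zip(signal, envelope_follower(sidechain, sr)):
        if env > threshold:
            # Gain reduction proportional to the sidechain's overshoot
            # of the threshold, scaled by the compression ratio.
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(s * gain)
    return out
```

In the real plugin the threshold and ratio come from the compressor's own parameters; the point of the sketch is just that gain reduction tracks the sibilance-only sidechain, so only "s" sounds get ducked. <br />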
<br />
<br />
I'm not trying to build a whole plugin UI from scratch at this stage. Looking around for code to work from, I was lucky enough to find an [https://github.com/p-hlp/CTAGDRC open source dynamic range compressor JUCE plugin] by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project ([https://github.com/spiricom/LEAF/ LEAF]) into this plugin and was able to get the de-esser algorithm above working, along with a couple of extra knobs and buttons in the UI. <br />
<br />
<br />
This side project into de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.<br />
<br />
===Week 3===<br />
<br />
====Goals====<br />
# Explore animation (demos/tutorials) with JUCE, in both 2D and 3D<br />
# Map some mouse and keystroke interactions to animation and audio processing parameters in an arbitrary (for now) FX chain<br />
# Test how these basic plugins behave in various DAWs... I know DAWs handle keystrokes and mouse events differently.</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22852Mulshine:320C2021-04-12T22:11:56Z<p>Mulshine: /* Music 320C: Software Projects in Music/Audio Signal Processing */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===My Ever-Evolving 320C Project Spec===<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled in [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and an already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still wanted to make a JUCE plugin that combined audio and visuals. I tend to make web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot on making my vocals sound good (read: hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom vocal EQ and a few spectral delay vocal effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or, I suppose, other instruments too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
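To make the horizon-as-EQ idea a bit more concrete, here is one hypothetical mapping (all names, band counts, and ranges are invented for illustration and are not plugin code): sample the drawn horizon at a handful of points and treat each height as one EQ band's gain.<br />

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Hypothetical horizon-to-EQ mapping: each sampled horizon height in [0, 1]
// becomes one band's gain, spanning -12 dB (height 0) to +12 dB (height 1),
// with a mid-screen horizon (height 0.5) meaning a flat EQ.
constexpr int kBands = 8;

std::array<double, kBands> horizonToBandGains(
        const std::array<double, kBands>& heights) {
    std::array<double, kBands> gains{};
    for (int b = 0; b < kBands; ++b) {
        double db = (heights[b] - 0.5) * 24.0; // height -> dB offset
        gains[b] = std::pow(10.0, db / 20.0);  // dB -> linear gain
    }
    return gains;
}
```

The appeal of a mapping like this is that the visual state ''is'' the parameter state: no slider ever has to appear, yet every horizon shape still corresponds to a well-defined EQ curve.<br />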
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in Xcode, reminded myself how to debug plugins in my DAW via Xcode, and more. <br />
<br />
<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22851Mulshine:320C2021-04-12T22:11:23Z<p>Mulshine: /* Music 320C: Software Projects in Music/Audio Signal Processing */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I tend to web-based audiovisual interactive media that straddles the line between passive listening and active gaming, (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot of on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and some a few spectral delay vocal effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter . In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply parametric EQ (peak filter)''' to input signal with frequency between '''5-8 kHz''' and a '''very high Q (like 100+)'''. This will produce a signal that peaks really aggressively on sibilant "s" sounds, since "s" frequency content is packed in to the 5-8kHz range. <br />
# '''Compressor original input signal with sibilant filtered signal as sidechain input.''' Because the sidechain input peaks aggressively on "s" sounds, the compressor will compress the original input only during those "s" sounds. The envelope of the compressor needs to be really quick to cut out "s" sounds as soon as they happen, and before they are perceptible. In experiments, I had to set the compressor's attack to approximately 0.15ms and release to 15-20ms to get smooth but effective compression. <br />
<br />
<br />
I'm not trying to build a whole plugin UI completely from scratch at this stage. Looking around for code to work from, I was lucky enough to find an [https://github.com/p-hlp/CTAGDRC open source dynamic range compressor JUCE plugin] by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project ([https://github.com/spiricom/LEAF/ LEAF] into this plugin and was able to get the de-esser algorithm above to work, along with a couple extra knobs and buttons in the UI. <br />
<br />
<br />
This side project in to de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22844Mulshine:320C2021-04-12T21:55:16Z<p>Mulshine: /* Week 2 */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I tend to web-based audiovisual interactive media that straddles the line between passive listening and active gaming, (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot of on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and some a few spectral delay vocal effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter . In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply parametric EQ (peak filter)''' to input signal with frequency between '''5-8 kHz''' and a '''very high Q''' (like 100+). This will produce a signal that peaks really aggressively on sibilant "s" sounds, since "s" frequency content is packed in to the 5-8kHz range. <br />
# '''<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22843Mulshine:320C2021-04-12T21:53:39Z<p>Mulshine: /* Week 2 */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I tend to web-based audiovisual interactive media that straddles the line between passive listening and active gaming, (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot of on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and some a few spectral delay vocal effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter . In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# '''Apply parametric EQ (peak filter) to input signal with frequency between 5-8 kHz and a very high Q (like 100+).'''<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22841Mulshine:320C2021-04-12T21:53:12Z<p>Mulshine: /* Week 2 */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I tend to web-based audiovisual interactive media that straddles the line between passive listening and active gaming, (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot of on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and some a few spectral delay vocal effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter . In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
===Week 2===<br />
<br />
I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more. <br />
<br />
<br />
I was recently interested in developing a de-esser plugin. I did some research on how de-essers work. [https://music.tutsplus.com/articles/how-to-create-a-de-esser-from-scratch-in-logic-or-any-daw-for-that-matter--audio-3793 This article] laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and fx chains. This is the algorithm they outline:<br />
# Apply parametric EQ (peak filter) to input signal with frequency between 5-8 kHz and a very high Q (like 100+)<br />
**This will<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22839Mulshine:320C2021-04-12T20:00:37Z<p>Mulshine: /* Music 320C: Software Projects in Music/Audio Signal Processing */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I tend to web-based audiovisual interactive media that straddles the line between passive listening and active gaming, (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot of on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and some a few spectral delay vocal effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter . In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
===Week 2===<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22838Mulshine:320C2021-04-12T19:59:57Z<p>Mulshine: /* Music 320C: Software Projects in Music/Audio Signal Processing */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I tend to web-based audiovisual interactive media that straddles the line between passive listening and active gaming, (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot of on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and some a few spectral delay vocal effects in Python for my final project in 320B last quarter [[File:Example.jpg]]. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter . In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
===Week 2===<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22837Mulshine:320C2021-04-12T19:59:15Z<p>Mulshine: /* Music 320C: Software Projects in Music/Audio Signal Processing */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I tend to web-based audiovisual interactive media that straddles the line between passive listening and active gaming, (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot of on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and some a few spectral delay vocal effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter . In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
===Week 2===<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22836Mulshine:320C2021-04-12T19:58:54Z<p>Mulshine: /* Music 320C: Software Projects in Music/Audio Signal Processing */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I tend to web-based audiovisual interactive media that straddles the line between passive listening and active gaming, (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot of on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and some a few spectral delay vocal effects in Python for my final project in 320B last quarter. <br />
<br />
<br />
Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter . In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
===Week 2===<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22835Mulshine:320C2021-04-12T19:57:47Z<p>Mulshine: /* Music 320C: Software Projects in Music/Audio Signal Processing */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I tend to web-based audiovisual interactive media that straddles the line between passive listening and active gaming, (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot of on making my vocals sound good (read, hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). In 320B last quarter, I developed a custom Vocal EQ and some spectral delay vocal effects in Python for my final projects. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or I suppose other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
===Week 2===<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22834Mulshine:320C2021-04-12T19:57:21Z<p>Mulshine: /* Music 320C: Software Projects in Music/Audio Signal Processing */</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
In week 1, I thought a lot about what I wanted to make for this course. I was enrolled [http://graphics.stanford.edu/courses/cs448z/ CS 448Z: Physically-Based Animation and Sound] with Doug James and imagined that I might make a JUCE Plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and already packed quarter schedule compelled me to drop 448Z (sorry Professor James!) and redouble my efforts on 320C. <br />
<br />
I still desired to make a JUCE plugin that combined audio and visuals. I tend toward web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see [https://www.mikemulshine.com/hairdresser Hairdresser] and [https://www.mikemulshine.com/mp5/mp5 MP5]). I also focus a lot on making my vocals sound good (read: hopefully decent) when I write/produce songs (see [https://www.youtube.com/watch?v=Nn0tbg-zSNo Sunny Day] or [https://www.youtube.com/watch?v=NV5llrGhd6o Hairdresser], among other tunes). In 320B last quarter, I developed a custom vocal EQ and some spectral delay vocal effects in Python for my final projects. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant graphical UI. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or, I suppose, other instruments too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the amount and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more. <br />
<br />
===Week 2===<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=Mulshine:320C&diff=22833Mulshine:320C2021-04-12T19:40:32Z<p>Mulshine: Created page with "==Music 320C: Software Projects in Music/Audio Signal Processing== ===Week 1=== ===Week 2=== ===Week 3==="</p>
<hr />
<div>==Music 320C: Software Projects in Music/Audio Signal Processing==<br />
<br />
===Week 1===<br />
<br />
===Week 2===<br />
<br />
===Week 3===</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Mulshine&diff=22832User:Mulshine2021-04-12T19:39:34Z<p>Mulshine: /* Mike Mulshine */</p>
<hr />
<div>==Mike Mulshine==<br />
<br />
I am a songwriter, musician, and developer of audiovisual interactive musical interfaces. Check out my [https://www.mikemulshine.com website].<br />
<br />
Here are some Wiki links to work I have done in courses at CCRMA:<br />
<br />
* [[320C]]</div>Mulshinehttps://ccrma.stanford.edu/mediawiki/index.php?title=User:Mulshine&diff=22831User:Mulshine2021-04-12T19:39:03Z<p>Mulshine: /* Mike Mulshine */</p>
<hr />
<div>==Mike Mulshine==<br />
<br />
I am a songwriter, musician, and developer of audiovisual interactive musical interfaces. Check out my [https://www.mikemulshine.com website].<br />
<br />
Here are some Wiki links to work I have done in courses at CCRMA:<br />
<br />
[[320C]]</div>Mulshine