'''Overview:'''

For my 220C project, I am making vocoder renditions of choral music. The vocoder takes the spectral qualities of the human voice and transfers them to an instrument, creating what sounds like a "talking instrument" or "robot voice." Over the course of this project, I aim to examine how vibrato, pitch, vowels, and spectrum in the dry signal affect the vocoded sound, and to discover how choral music translates to vocoded vocals.

I will be vocoding three choral pieces: Maurice Duruflé's "Ubi Caritas," Wolfgang Amadeus Mozart's "Lacrimosa," and Eric Whitacre's "Lux Aurumque."

'''Vocoder:'''

I made a basic vocoder in Max/MSP, but I am not satisfied with its sound quality or its enunciation of vowels and consonants. Since the goal of the project is to examine the effect of dry parameters on the vocoded sound, the origin/originality of the vocoder is not a priority for me. I may end up using a ready-made vocoder in Logic for the sake of efficiency, timbre, and sound quality.
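
The basic idea is simple to sketch in code. Below is a minimal channel vocoder in Python (NumPy/SciPy) for illustration only; it is not the Max/MSP patch or Logic's vocoder, and the band count and cutoff frequencies are arbitrary choices:

```python
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

def vocode(modulator, carrier, fs, n_bands=16, fmin=80.0, fmax=7000.0):
    """Impose the modulator's per-band amplitude envelopes onto the carrier."""
    edges = np.geomspace(fmin, fmax, n_bands + 1)   # log-spaced band edges
    a = np.exp(-2 * np.pi * 50.0 / fs)              # one-pole smoother (~50 Hz)
    out = np.zeros_like(carrier, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        # Envelope follower: rectify the modulator's band, then low-pass it.
        env = lfilter([1 - a], [1, -a], np.abs(sosfilt(sos, modulator)))
        # That envelope gates the carrier's matching band.
        out += sosfilt(sos, carrier) * env
    return out / (np.abs(out).max() + 1e-12)        # normalize to +/-1
```

A carrier rich in harmonics (a sawtooth or pulse wave) gives the classic robot voice; a pure sine carrier leaves the envelopes almost nothing to shape.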

'''Methodology:'''

One singer is assigned to each voice part for each song. Every song has bass, tenor, alto, and soprano lines, and some lines call for multiple voices. The group will sing together during the recording sessions, but each singer will have an individual microphone placed close to his or her mouth to isolate that voice from the group. The goal is to let the singers blend during the piece while still recording each voice separately for individual vocoding.

The following singers have generously agreed to record vocals for my project:

[[Ubi Caritas]]

Bass: Joel Chapman
Bass: Evan Gitterman
Tenor: Andrew Forsyth
Tenor: Will Watson
Alto: Laura Austin
Soprano: Mia Farinelli

[[Lux Aurumque]]

Bass: Joel Chapman
Bass: Evan Gitterman
Tenor: Andrew Forsyth
Tenor: Will Watson
Alto: Laura Austin
Alto: Michelle Jia
Soprano: Mia Farinelli
Soprano: Grace Laboy

[[Lacrimosa]]

Bass: Joel Chapman
Tenor: Andrew Forsyth
Alto: Laura Austin
Soprano: Grace Laboy
Piano: Nayantara Jain

'''Rehearsals and Testing:'''

We held two rehearsals for each piece before recording. Both sessions went very swiftly because the singers sight-read each piece exceptionally well. During the rehearsals, I recorded some dry samples from Andrew Forsyth, one of the tenors. I had him sing phrases with and without vibrato. I also had him sing a short phrase (from Ubi Caritas) all on one note, and then another take on a different note. After vocoding each sample, I noticed the following:

1) Vibrato creates a nice spectral modulation in the vocoded voice. The amplitude and high frequencies flutter and add a pleasant human quality to an otherwise robotic voice.

2) When a vocoded note does not match the sung note from the dry sample, the formant structure of the vocoded voice is altered: it no longer matches that of the dry sample, and the vocoded sample sounds nasal as a result.

3) Consonants are well defined by the vocoder. It is therefore important to record experienced singers who know how to sing the consonants of traditional choral repertoire (rolled r's, short s's, etc.).
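
Observation 1) can be reproduced numerically. The sketch below (Python/SciPy, with made-up frequencies and vibrato depth) band-passes a steady tone and a vibrato tone through a single fixed band near the tone's frequency, like one channel of a vocoder's filter bank, and compares the resulting envelope flutter:

```python
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

fs = 16000
t = np.arange(2 * fs) / fs
f0 = 440.0

def tone(vib_rate=0.0, vib_depth=0.0):
    """Sine tone with optional frequency vibrato (rate and depth in Hz)."""
    inst_f = f0 + vib_depth * np.sin(2 * np.pi * vib_rate * t)
    return np.sin(2 * np.pi * np.cumsum(inst_f) / fs)

# A fixed band just above f0, so the vibrato sweeps in and out of it.
sos = butter(4, [445.0, 470.0], btype="bandpass", fs=fs, output="sos")

def envelope_flutter(x):
    env = np.abs(sosfilt(sos, x))            # rectify the band output
    a = np.exp(-2 * np.pi * 20.0 / fs)       # smooth with a ~20 Hz one-pole
    env = lfilter([1 - a], [1, -a], env)
    return env[fs // 2:].std()               # skip the settling transient

steady = envelope_flutter(tone())            # essentially flat
vibrato = envelope_flutter(tone(vib_rate=5.5, vib_depth=8.0))
# The vibrato tone's band envelope flutters at the vibrato rate, so its
# standard deviation comes out much larger than the steady tone's.
```

This is the "spectral modulation" heard in the vocoded voice: as the dry pitch wobbles, each band's envelope rises and falls at the vibrato rate.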

'''Plan for Voice Vocoding:'''

I will go through each vocal track and play each part's notes on the keyboard vocoder. This may be difficult and tedious, but I can use MIDI in Logic to edit, split, and move notes according to the rhythm of the individual voices. Choosing a good reverb will also be essential for blend and authenticity.

'''Recording Process:'''

CCRMA Professor Jonathan Abel has microphones that effectively isolate individual voices in a choral group. I spoke with him about using these mics and arranged a meeting at which he would hand them off and show me how to use them. The meeting was supposed to take place on the day of the recording session, but he did not show. As a consequence, we had to use a combination of condenser and dynamic mics in the CCRMA studio, arranged in a circle to best isolate the individual voices. This setup was far from ideal, but we made do. All in all, the dry samples were satisfactory. However, some of them contained noticeable "bleed," meaning that other voices spilled into a microphone intended to capture only one specific voice.

'''Vocoding Process:'''

The vocoding process itself went very swiftly. The singers' tone, vowels, and consonants were generally preserved in the vocoded product. For samples with significant bleed, I had to shift some formants (mainly down, to avoid nasal-sounding timbres) and adjust the formant stretch.

'''Reverb:'''

I knew all along that reverb would be essential for simulating the timbre and dynamics of a real choral group. My first thought was to use a convolution (impulse response) reverb in Logic Pro to create a customized space (perhaps using Stanford's Memorial Church as the venue). Instead, I decided to use an 8-input Faust reverb that I had used in my 220B class with the help of Romain Michon. This reverb made the most sense because 1) it gave me more control over the highs and lows and 2) the CCRMA stage has 8 inputs.

'''Ubi Caritas:'''

https://www.youtube.com/watch?v=aNsazuVyemw&feature=youtu.be
<hr />
<div>'''Overview:'''<br />
For my 220C project, I am making vocoder renditions of choral music. The vocoder takes spectral qualities of the human voice and translates them to an instrument, creating what seems to be a "talking instrument" or "robot voice." Over the course of this project I aim to examine the effects of vibrato, pitch, vowels, and spectrum in the dry signal on the vocoded sound and discover how choral music translates to vocoded vocals. <br />
<br />
I will be vocoding three choral pieces: Maurice Duruflé's "Ubi Caritas," Wolfgang Mozart's "Lacrimosa," and Eric Whitacre's "Lux Aurumque."<br />
<br />
<br />
'''Vocoder:'''<br />
I made a basic vocoder in Max MSP, but I am not satisfied with the quality of sound and annunciation of vowels/consonants. Since the goal of the project is to examine the effect of dry parameters on vocoded sound, the origin/originality of the vocoder is not necessarily a priority to me. I may end up toying with an already-made vocoder in Logic for the purposes of efficiency, timbre, and sound quality. <br />
<br />
<br />
'''Methodology:'''<br />
One singer is assigned to each voice for each song. Every song has bass, tenor, alto, and soprano lines, and some lines have multiple voices per line. The group will sing together during the recording sessions, but each singer will have an individual microphone placed closely to his/her mouth to isolate his/her voice from the group. The goal is to allow singers to blend during the piece while still recording the voices separately for individual vocoding. <br />
<br />
The following singers have generously agreed to record vocals for my project:<br />
<br />
<br />
[[Ubi Caritas]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Mia Farinelli<br />
<br />
<br />
[[Lux Aurumque]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Alto: Michelle Jia<br />
<br />
Soprano: Mia Farinelli<br />
<br />
Soprano: Grace Laboy<br />
<br />
<br />
[[Lacrimosa]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Grace Laboy<br />
<br />
Piano: Nayantara Jain<br />
<br />
<br />
'''Rehearsals and Testing:'''<br />
We held two rehearsals for each piece before recording. Both sessions went very swiftly because the singers sight-read each piece exceptionally well. During the rehearsals, I recorded some dry samples from Andrew Forsyth, the tenor voice. I had him sing phrases with and without vibrato. I also had him sing a short phrase (from Ubi Caritas) all on one note and then another take on another note. After vocoding each sample, I noticed the following:<br />
<br />
1) Vibrato creates a nice spectral modulation in the vocoded voice. The amplitude and high frequencies flutter and add a pleasant human quality to an otherwise robotic voice.<br />
<br />
2) When a vocoded note does not match the sung note from the dry sample, the formant of the vocoded voice is altered. Formant of the vocoded sample does not match that of the dry sample, and the vocoded sample sounds nasal as a result.<br />
<br />
3) Consonants are well defined by the vocoder. Therefore, it is important to sample experienced singers who know how to sing consonants in traditional choral pieces (rolled r's, short s's, etc…). <br />
<br />
<br />
'''Plan for Voice Vocoding:'''<br />
I will go through each vocal track and play each part's appropriate notes on the keyboard vocoder. This may be difficult and tedious, but I can use MIDI in Logic to edit/split/move notes according to the rhythm of the individual voices. Choosing good reverb will also be essential for blending and authenticity.<br />
<br />
<br />
'''Recording Process:'''<br />
CCRMA Professor Jonathan Abel has microphones that effectively isolate individual voices in a choral group. I spoke with him about using these mics and arranged a meeting with him in which he would give me the mics and run me through how to use them. This meeting was supposed to take place on the day of the recording session, but he did not show. As a consequence, we had to use a combination of condenser and dynamic mics in CCRMA studio and arrange them in a circle (in order to best isolate the individual voices). This setup was far from ideal, but we made due. All in all, the dry samples were satisfactory. However, some of the dry samples contained noticeable "bleeding," meaning that other voices spilled into the microphone that was intended to capture only one specific voice. <br />
<br />
<br />
'''Vocoding Process:'''<br />
The vocoding process itself went very swiftly. The singers' tone, vowels, and consonants were generally maintained in the vocoded product. For samples that had significant bleeding, I had to shift some formants (mainly down, to avoid nasal-sounding timbres) and adjust the formant stretch. <br />
<br />
<br />
'''Reverb:'''<br />
I knew all along that reverb was essential in simulating the timbre and dynamics of a real choral group. My first thought was to use impulse response in Logic Pro to create a customized reverb (perhaps using Stanford's Memorial Church as the venue). However, I decided to use an 8-input Faust reverb that I used in my 220B class with the help of Romain Michon. This reverb made most sense because 1) I had more control over the highs and lows and 2) the CCRMA stage has 8 inputs.</div>Favishttps://ccrma.stanford.edu/mediawiki/index.php?title=Favis220c&diff=16588Favis220c2014-06-10T19:00:06Z<p>Favis: </p>
<hr />
<div>'''Overview:'''<br />
For my 220C project, I am making vocoder renditions of choral music. The vocoder takes spectral qualities of the human voice and translates them to an instrument, creating what seems to be a "talking instrument" or "robot voice." Over the course of this project I aim to examine the effects of vibrato, pitch, vowels, and spectrum in the dry signal on the vocoded sound and discover how choral music translates to vocoded vocals. <br />
<br />
I will be vocoding three choral pieces: Maurice Duruflé's "Ubi Caritas," Wolfgang Mozart's "Lacrimosa," and Eric Whitacre's "Lux Aurumque."<br />
<br />
<br />
'''Vocoder:'''<br />
I made a basic vocoder in Max MSP, but I am not satisfied with the quality of sound and annunciation of vowels/consonants. Since the goal of the project is to examine the effect of dry parameters on vocoded sound, the origin/originality of the vocoder is not necessarily a priority to me. I may end up toying with an already-made vocoder in Logic for the purposes of efficiency, timbre, and sound quality. <br />
<br />
<br />
'''Methodology:'''<br />
One singer is assigned to each voice for each song. Every song has bass, tenor, alto, and soprano lines, and some lines have multiple voices per line. The group will sing together during the recording sessions, but each singer will have an individual microphone placed closely to his/her mouth to isolate his/her voice from the group. The goal is to allow singers to blend during the piece while still recording the voices separately for individual vocoding. <br />
<br />
The following singers have generously agreed to record vocals for my project:<br />
<br />
<br />
[[Ubi Caritas]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Mia Farinelli<br />
<br />
<br />
[[Lux Aurumque]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Alto: Michelle Jia<br />
<br />
Soprano: Mia Farinelli<br />
<br />
Soprano: Grace Laboy<br />
<br />
<br />
[[Lacrimosa]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Grace Laboy<br />
<br />
Piano: Nayantara Jain<br />
<br />
<br />
'''Rehearsals and Testing:'''<br />
We held two rehearsals for each piece before recording. Both sessions went very swiftly because the singers sight-read each piece exceptionally well. During the rehearsals, I recorded some dry samples from Andrew Forsyth, the tenor voice. I had him sing phrases with and without vibrato. I also had him sing a short phrase (from Ubi Caritas) all on one note and then another take on another note. After vocoding each sample, I noticed the following:<br />
<br />
1) Vibrato creates a nice spectral modulation in the vocoded voice. The amplitude and high frequencies flutter and add a pleasant human quality to an otherwise robotic voice.<br />
<br />
2) When a vocoded note does not match the sung note from the dry sample, the formant of the vocoded voice is altered. Formant of the vocoded sample does not match that of the dry sample, and the vocoded sample sounds nasal as a result.<br />
<br />
3) Consonants are well defined by the vocoder. Therefore, it is important to sample experienced singers who know how to sing consonants in traditional choral pieces (rolled r's, short s's, etc…). <br />
<br />
<br />
'''Plan for Voice Vocoding:'''<br />
I will go through each vocal track and play each part's appropriate notes on the keyboard vocoder. This may be difficult and tedious, but I can use MIDI in Logic to edit/split/move notes according to the rhythm of the individual voices. Choosing good reverb will also be essential for blending and authenticity.<br />
<br />
<br />
'''Recording Process:'''<br />
CCRMA Professor Jonathan Abel has microphones that effectively isolate individual voices in a choral group. I spoke with him about using these mics and arranged a meeting with him in which he would give me the mics and run me through how to use them. This meeting was supposed to take place on the day of the recording session, but he did not show. As a consequence, we had to use a combination of condenser and dynamic mics in CCRMA studio and arrange them in a circle (in order to best isolate the individual voices). This setup was far from ideal, but we made due. All in all, the dry samples were satisfactory. However, some of the dry samples contained noticeable "bleeding," meaning that other voices spilled into the microphone that was intended to capture only one specific voice. <br />
<br />
<br />
'''Vocoding Process:'''<br />
The vocoding process itself went very swiftly. The singers' tone, vowels, and consonants were generally maintained in the vocoded product. For samples that had significant bleeding, I had to shift some formants (mainly down, to avoid nasal-sounding timbres) and adjust the formant stretch. <br />
<br />
<br />
'''Reverb:'''<br />
I knew all along that reverb was essential in simulating the timbre and dynamics of a real choral group. My first thought was to use impulse response in Logic Pro to create a customized reverb (perhaps using Stanford's Memorial Church as the venue). However, I decided to use an 8-input Faust reverb that I used in my 220B class with the help of Romain Michon. This reverb made most sense because 1) I had more control over the highs and lows and 2) the CCRMA stage has 8 inputs.<br />
<br />
<br />
/Users/fredavis/Documents/Ubi Caritas 220c.dv</div>Favishttps://ccrma.stanford.edu/mediawiki/index.php?title=Favis220c&diff=16580Favis220c2014-06-09T19:53:36Z<p>Favis: </p>
<hr />
<div>'''Overview:'''<br />
For my 220C project, I am making vocoder renditions of choral music. The vocoder takes spectral qualities of the human voice and translates them to an instrument, creating what seems to be a "talking instrument" or "robot voice." Over the course of this project I aim to examine the effects of vibrato, pitch, vowels, and spectrum in the dry signal on the vocoded sound and discover how choral music translates to vocoded vocals. <br />
<br />
I will be vocoding three choral pieces: Maurice Duruflé's "Ubi Caritas," Wolfgang Mozart's "Lacrimosa," and Eric Whitacre's "Lux Aurumque."<br />
<br />
<br />
'''Vocoder:'''<br />
I made a basic vocoder in Max MSP, but I am not satisfied with the quality of sound and annunciation of vowels/consonants. Since the goal of the project is to examine the effect of dry parameters on vocoded sound, the origin/originality of the vocoder is not necessarily a priority to me. I may end up toying with an already-made vocoder in Logic for the purposes of efficiency, timbre, and sound quality. <br />
<br />
<br />
'''Methodology:'''<br />
One singer is assigned to each voice for each song. Every song has bass, tenor, alto, and soprano lines, and some lines have multiple voices per line. The group will sing together during the recording sessions, but each singer will have an individual microphone placed closely to his/her mouth to isolate his/her voice from the group. The goal is to allow singers to blend during the piece while still recording the voices separately for individual vocoding. <br />
<br />
The following singers have generously agreed to record vocals for my project:<br />
<br />
<br />
[[Ubi Caritas]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Mia Farinelli<br />
<br />
<br />
[[Lux Aurumque]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Alto: Michelle Jia<br />
<br />
Soprano: Mia Farinelli<br />
<br />
Soprano: Grace Laboy<br />
<br />
<br />
[[Lacrimosa]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Grace Laboy<br />
<br />
Piano: Nayantara Jain<br />
<br />
<br />
'''Rehearsals and Testing:'''<br />
We held two rehearsals for each piece before recording. Both sessions went very swiftly because the singers sight-read each piece exceptionally well. During the rehearsals, I recorded some dry samples from Andrew Forsyth, the tenor voice. I had him sing phrases with and without vibrato. I also had him sing a short phrase (from Ubi Caritas) all on one note and then another take on another note. After vocoding each sample, I noticed the following:<br />
<br />
1) Vibrato creates a nice spectral modulation in the vocoded voice. The amplitude and high frequencies flutter and add a pleasant human quality to an otherwise robotic voice.<br />
<br />
2) When a vocoded note does not match the sung note from the dry sample, the formant of the vocoded voice is altered. Formant of the vocoded sample does not match that of the dry sample, and the vocoded sample sounds nasal as a result.<br />
<br />
3) Consonants are well defined by the vocoder. Therefore, it is important to sample experienced singers who know how to sing consonants in traditional choral pieces (rolled r's, short s's, etc…). <br />
<br />
<br />
'''Plan for Voice Vocoding:'''<br />
I will go through each vocal track and play each part's appropriate notes on the keyboard vocoder. This may be difficult and tedious, but I can use MIDI in Logic to edit/split/move notes according to the rhythm of the individual voices. Choosing good reverb will also be essential for blending and authenticity.<br />
<br />
<br />
'''Recording Process:'''<br />
CCRMA Professor Jonathan Abel has microphones that effectively isolate individual voices in a choral group. I spoke with him about using these mics and arranged a meeting with him in which he would give me the mics and run me through how to use them. This meeting was supposed to take place on the day of the recording session, but he did not show. As a consequence, we had to use a combination of condenser and dynamic mics in CCRMA studio and arrange them in a circle (in order to best isolate the individual voices). This setup was far from ideal, but we made due. All in all, the dry samples were satisfactory. However, some of the dry samples contained noticeable "bleeding," meaning that other voices spilled into the microphone that was intended to capture only one specific voice. <br />
<br />
<br />
'''Vocoding Process:'''<br />
The vocoding process itself went very swiftly. The singers' tone, vowels, and consonants were generally maintained in the vocoded product. For samples that had significant bleeding, I had to shift some formants (mainly down, to avoid nasal-sounding timbres) and adjust the formant stretch. <br />
<br />
<br />
'''Reverb:'''<br />
I knew all along that reverb was essential in simulating the timbre and dynamics of a real choral group. My first thought was to use impulse response in Logic Pro to create a customized reverb (perhaps using Stanford's Memorial Church as the venue). However, I decided to use an 8-input Faust reverb that I used in my 220B class with the help of Romain Michon. This reverb made most sense because 1) I had more control over the highs and lows and 2) the CCRMA stage has 8 inputs.</div>Favishttps://ccrma.stanford.edu/mediawiki/index.php?title=Favis220c&diff=16579Favis220c2014-06-09T19:53:19Z<p>Favis: </p>
<hr />
<div>'''Overview:'''<br />
For my 220C project, I am making vocoder renditions of choral music. The vocoder takes spectral qualities of the human voice and translates them to an instrument, creating what seems to be a "talking instrument" or "robot voice." Over the course of this project I aim to examine the effects of vibrato, pitch, vowels, and spectrum in the dry signal on the vocoded sound and discover how choral music translates to vocoded vocals. <br />
<br />
I will be vocoding three choral pieces: Maurice Duruflé's "Ubi Caritas," Wolfgang Mozart's "Lacrimosa," and Eric Whitacre's "Lux Aurumque."<br />
<br />
<br />
'''Vocoder:'''<br />
I made a basic vocoder in Max MSP, but I am not satisfied with the quality of sound and annunciation of vowels/consonants. Since the goal of the project is to examine the effect of dry parameters on vocoded sound, the origin/originality of the vocoder is not necessarily a priority to me. I may end up toying with an already-made vocoder in Logic for the purposes of efficiency, timbre, and sound quality. <br />
<br />
<br />
'''Methodology:'''<br />
One singer is assigned to each voice for each song. Every song has bass, tenor, alto, and soprano lines, and some lines have multiple voices per line. The group will sing together during the recording sessions, but each singer will have an individual microphone placed closely to his/her mouth to isolate his/her voice from the group. The goal is to allow singers to blend during the piece while still recording the voices separately for individual vocoding. <br />
<br />
The following singers have generously agreed to record vocals for my project:<br />
<br />
<br />
[[Ubi Caritas]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Mia Farinelli<br />
<br />
<br />
[[Lux Aurumque]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Alto: Michelle Jia<br />
<br />
Soprano: Mia Farinelli<br />
<br />
Soprano: Grace Laboy<br />
<br />
<br />
[[Lacrimosa]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Grace Laboy<br />
<br />
Piano: Nayantara Jain<br />
<br />
<br />
'''Rehearsals and Testing:'''<br />
We held two rehearsals for each piece before recording. Both sessions went very swiftly because the singers sight-read each piece exceptionally well. During the rehearsals, I recorded some dry samples from Andrew Forsyth, the tenor voice. I had him sing phrases with and without vibrato. I also had him sing a short phrase (from Ubi Caritas) all on one note and then another take on another note. After vocoding each sample, I noticed the following:<br />
<br />
1) Vibrato creates a nice spectral modulation in the vocoded voice. The amplitude and high frequencies flutter and add a pleasant human quality to an otherwise robotic voice.<br />
<br />
2) When a vocoded note does not match the sung note from the dry sample, the formant of the vocoded voice is altered. Formant of the vocoded sample does not match that of the dry sample, and the vocoded sample sounds nasal as a result.<br />
<br />
3) Consonants are well defined by the vocoder. Therefore, it is important to sample experienced singers who know how to sing consonants in traditional choral pieces (rolled r's, short s's, etc…). <br />
<br />
<br />
'''Plan for Voice Vocoding:'''<br />
I will go through each vocal track and play each part's appropriate notes on the keyboard vocoder. This may be difficult and tedious, but I can use MIDI in Logic to edit/split/move notes according to the rhythm of the individual voices. Choosing good reverb will also be essential for blending and authenticity.<br />
<br />
'''Recording Process:'''<br />
CCRMA Professor Jonathan Abel has microphones that effectively isolate individual voices in a choral group. I spoke with him about using these mics and arranged a meeting with him in which he would give me the mics and run me through how to use them. This meeting was supposed to take place on the day of the recording session, but he did not show. As a consequence, we had to use a combination of condenser and dynamic mics in CCRMA studio and arrange them in a circle (in order to best isolate the individual voices). This setup was far from ideal, but we made due. All in all, the dry samples were satisfactory. However, some of the dry samples contained noticeable "bleeding," meaning that other voices spilled into the microphone that was intended to capture only one specific voice. <br />
<br />
<br />
'''Vocoding Process:'''<br />
The vocoding process itself went very swiftly. The singers' tone, vowels, and consonants were generally maintained in the vocoded product. For samples that had significant bleeding, I had to shift some formants (mainly down, to avoid nasal-sounding timbres) and adjust the formant stretch. <br />
<br />
<br />
'''Reverb:'''<br />
I knew all along that reverb was essential in simulating the timbre and dynamics of a real choral group. My first thought was to use impulse response in Logic Pro to create a customized reverb (perhaps using Stanford's Memorial Church as the venue). However, I decided to use an 8-input Faust reverb that I used in my 220B class with the help of Romain Michon. This reverb made most sense because 1) I had more control over the highs and lows and 2) the CCRMA stage has 8 inputs.</div>Favishttps://ccrma.stanford.edu/mediawiki/index.php?title=Favis220c&diff=16578Favis220c2014-06-09T19:53:01Z<p>Favis: </p>
<hr />
<div>'''Overview:'''<br />
For my 220C project, I am making vocoder renditions of choral music. The vocoder takes spectral qualities of the human voice and translates them to an instrument, creating what seems to be a "talking instrument" or "robot voice." Over the course of this project I aim to examine the effects of vibrato, pitch, vowels, and spectrum in the dry signal on the vocoded sound and discover how choral music translates to vocoded vocals. <br />
<br />
I will be vocoding three choral pieces: Maurice Duruflé's "Ubi Caritas," Wolfgang Mozart's "Lacrimosa," and Eric Whitacre's "Lux Aurumque."<br />
<br />
<br />
'''Vocoder:'''<br />
I made a basic vocoder in Max MSP, but I am not satisfied with the quality of sound and annunciation of vowels/consonants. Since the goal of the project is to examine the effect of dry parameters on vocoded sound, the origin/originality of the vocoder is not necessarily a priority to me. I may end up toying with an already-made vocoder in Logic for the purposes of efficiency, timbre, and sound quality. <br />
<br />
<br />
'''Methodology:'''<br />
One singer is assigned to each voice for each song. Every song has bass, tenor, alto, and soprano lines, and some lines have multiple voices per line. The group will sing together during the recording sessions, but each singer will have an individual microphone placed closely to his/her mouth to isolate his/her voice from the group. The goal is to allow singers to blend during the piece while still recording the voices separately for individual vocoding. <br />
<br />
The following singers have generously agreed to record vocals for my project:<br />
<br />
<br />
[[Ubi Caritas]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Mia Farinelli<br />
<br />
<br />
[[Lux Aurumque]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Alto: Michelle Jia<br />
<br />
Soprano: Mia Farinelli<br />
<br />
Soprano: Grace Laboy<br />
<br />
<br />
[[Lacrimosa]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Grace Laboy<br />
<br />
Piano: Nayantara Jain<br />
<br />
<br />
'''Rehearsals and Testing:'''<br />
We held two rehearsals for each piece before recording. Both sessions went very swiftly because the singers sight-read each piece exceptionally well. During the rehearsals, I recorded some dry samples from Andrew Forsyth, the tenor voice. I had him sing phrases with and without vibrato. I also had him sing a short phrase (from Ubi Caritas) all on one note and then another take on another note. After vocoding each sample, I noticed the following:<br />
<br />
1) Vibrato creates a nice spectral modulation in the vocoded voice. The amplitude and high frequencies flutter and add a pleasant human quality to an otherwise robotic voice.<br />
<br />
2) When a vocoded note does not match the sung note from the dry sample, the formants of the vocoded voice are shifted: they no longer match those of the dry sample, and the vocoded sample sounds nasal as a result.<br />
<br />
3) Consonants are well defined by the vocoder. Therefore, it is important to sample experienced singers who know how to sing the consonants of traditional choral music (rolled r's, short s's, etc.). <br />
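Observation 1 can be reproduced numerically: a tone with vibrato drifts across a vocoder's analysis bands, so its per-band energy flutters, while a steady tone's band energy stays essentially constant. The NumPy sketch below uses synthetic tones; the 5 Hz / ±20 Hz vibrato and the 445–560 Hz band are arbitrary illustration values, not measurements from the recordings. <br />

```python
import numpy as np

def band_energy_over_time(sig, sr, lo, hi, frame=1024, hop=256):
    """Short-time RMS energy of sig within the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(frame, 1 / sr)
    band = (freqs >= lo) & (freqs < hi)
    win = np.hanning(frame)
    return np.array([
        np.sqrt(np.mean(np.abs(np.fft.rfft(sig[i:i + frame] * win)[band]) ** 2))
        for i in range(0, len(sig) - frame, hop)
    ])

sr = 16000
t = np.arange(2 * sr) / sr
flat = np.sin(2 * np.pi * 440 * t)
# 5 Hz vibrato, +/- 20 Hz deep, centered on 440 Hz
vib = np.sin(2 * np.pi * (440 * t + (20 / 5) * np.sin(2 * np.pi * 5 * t)))
# a band whose lower edge sits just above the center pitch: the vibrato
# tone swings in and out of it, so its energy flutters
e_flat = band_energy_over_time(flat, sr, 445, 560)
e_vib = band_energy_over_time(vib, sr, 445, 560)
```

The spread of `e_vib` comes out far larger than that of `e_flat`, which matches the pleasant flutter heard in the vocoded vibrato samples. <br />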
<br />
<br />
'''Plan for Voice Vocoding:'''<br />
I will go through each vocal track and play each part's appropriate notes on the keyboard vocoder. This may be difficult and tedious, but I can use MIDI in Logic to edit/split/move notes according to the rhythm of the individual voices. Choosing good reverb will also be essential for blending and authenticity.<br />
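The edit/split/move step can be stated precisely. Representing each note as a (start, end, pitch) tuple in beats, cutting a held note at the singer's syllable onsets looks like the plain-Python sketch below; it is a stand-in for the scissors edits I will perform by hand in Logic's piano roll, and the tuple format is my own, not a Logic or MIDI file format. <br />

```python
def split_note(note, onsets):
    """Split one (start, end, pitch) note at each onset time that falls
    strictly inside it, like cutting with a piano-roll scissors tool."""
    start, end, pitch = note
    cuts = [t for t in sorted(onsets) if start < t < end]
    points = [start] + cuts + [end]
    return [(a, b, pitch) for a, b in zip(points, points[1:])]

# a whole note at middle C, re-cut where the singer articulates syllables;
# the onset at beat 9.0 lies outside the note and is ignored
split_note((0.0, 4.0, 60), [1.0, 1.5, 3.0, 9.0])
# -> [(0.0, 1.0, 60), (1.0, 1.5, 60), (1.5, 3.0, 60), (3.0, 4.0, 60)]
```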
<br />
'''Recording Process:'''<br />
CCRMA Professor Jonathan Abel has microphones that effectively isolate individual voices in a choral group. I spoke with him about using these mics and arranged a meeting at which he would give me the mics and run me through how to use them. This meeting was supposed to take place on the day of the recording session, but he did not show. As a consequence, we had to use a combination of condenser and dynamic mics in the CCRMA studio and arrange them in a circle (to best isolate the individual voices). This setup was far from ideal, but we made do. All in all, the dry samples were satisfactory. However, some of them contained noticeable "bleeding," meaning that other voices spilled into a microphone that was intended to capture only one specific voice. <br />
<br />
<br />
'''Vocoding Process:'''<br />
The vocoding process itself went very swiftly. The singers' tone, vowels, and consonants were generally maintained in the vocoded product. For samples that had significant bleeding, I had to shift some formants (mainly down, to avoid nasal-sounding timbres) and adjust the formant stretch. <br />
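The formant adjustment can be sketched as well. One crude way to move spectral peaks down is to rescale the magnitude spectrum along the frequency axis while keeping the phase. This is only an illustration of the idea, not Logic's actual formant-shift algorithm; note that this naive version drags the harmonics along with the formants, which a real formant shifter avoids. <br />

```python
import numpy as np

def shift_formants(frame, ratio):
    """Scale the magnitude spectrum along the frequency axis by `ratio`
    (< 1 moves spectral peaks down), keeping the original phase."""
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    bins = np.arange(len(mag), dtype=float)
    # a peak at bin p ends up at bin p * ratio
    new_mag = np.interp(bins / ratio, bins, mag, right=0.0)
    return np.fft.irfft(new_mag * np.exp(1j * phase), n=len(frame))
```

For example, a component sitting at FFT bin 100 moved with a ratio of 0.5 lands at bin 50, i.e. an octave lower on the frequency axis. <br />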
<br />
<br />
'''Reverb:'''<br />
I knew all along that reverb would be essential in simulating the timbre and dynamics of a real choral group. My first thought was to use an impulse response in Logic Pro to create a customized reverb (perhaps using Stanford's Memorial Church as the venue). However, I decided to use an 8-input Faust reverb that I had used in my 220B class with the help of Romain Michon. This reverb made the most sense because 1) I had more control over the highs and lows and 2) the CCRMA stage has 8 inputs.</div>Favishttps://ccrma.stanford.edu/mediawiki/index.php?title=220c-spring-2014&diff=16575220c-spring-20142014-06-09T19:15:01Z<p>Favis: /* Project Description */</p>
<hr />
<div>= Music 220C: Course Wiki =<br />
<br />
This is a community wiki page maintained by Music 220C class.<br />
<br />
'''Note''': Register a link to your project blog in the section below.<br />
<br />
= Final Presentation Schedule =<br />
<br />
The final presentation for Music 220C will take place on the CCRMA stage on June 10, 2014, starting at 3:30pm. Please put your name in one of the available spots. As some people might need more time than others, this is just an order, and each spot can be as long as you want (though probably not more than 20 minutes). Please specify any technical needs you might have for your presentation. If you plan to do your presentation in the listening room, please sign up for the latest available spot on the list (once the presentations on stage are over, we'll move to the listening room).<br />
<br />
- Freddy Avis <br><br />
- Caleb Rau <br><br />
- Sean Nealon <br><br />
- Hana Shin <br><br />
- Shu Yu Lin <br><br />
- Victoria Grace <br><br />
- Joel Chapman <br><br />
- Andrew Forsyth <br><br />
- Madeline Huberth <br><br />
- Griffin Stoller <br><br />
- Fang Yi Lin <br><br />
- Cooper Newby <br><br />
&&&&& listening room &&&&& <br><br />
- Gio Jacuzzi<br><br />
- Byron Walker<br><br />
- Evan Gitterman <br><br />
<br />
= Project Description =<br />
<br />
Romain Michon (Example): http://google.com <br><br />
Erich Peske: https://ccrma.stanford.edu/wiki/Ambisonic_rhythms <br><br />
Hana Shin: https://ccrma.stanford.edu/~hanashin/220c/ <br><br />
Evan Gitterman: [[Dillafier]] <br><br />
Freddy Avis: https://ccrma.stanford.edu/wiki/Favis220c <br><br />
Andrew Forsyth: https://ccrma.stanford.edu/wiki/Voice_Pedal <br><br />
Joel Chapman: https://ccrma.stanford.edu/~joel/220c/tunings.html <br><br />
Gio Jacuzzi: https://ccrma.stanford.edu/~gjacuzzi/220c/index.html <br><br />
Shu Yu Lin: https://ccrma.stanford.edu/~sylin/220c/somewhereInBetween.html <br><br />
Alex Chechile: https://ccrma.stanford.edu/~chechile/220c/220csite.html<br><br />
Cooper Newby: http://www.coopernewby.com/tweeter-tracker <br><br />
Madeline Huberth: https://ccrma.stanford.edu/~mhuberth/220c/progress.html<br><br />
Byron Walker: https://ccrma.stanford.edu/~byron/220c/index.html<br><br />
Griffin Stoller: https://ccrma.stanford.edu/wiki/GriffinStollerProject <br><br />
Caleb Rau: https://ccrma.stanford.edu/user/c/crau/Library/Web/220c/crau220c.html <br><br />
Fang Yi Lin: [http://wiki.hikari-project.com/index.php/Project_EAST Project EAST] <br></div>Favishttps://ccrma.stanford.edu/mediawiki/index.php?title=Favis220c&diff=16553Favis220c2014-05-27T18:24:30Z<p>Favis: </p>
<hr />
<div>'''Overview:'''<br />
For my 220C project, I am making vocoder renditions of choral music. The vocoder takes spectral qualities of the human voice and translates them to an instrument, creating what seems to be a "talking instrument" or "robot voice." Over the course of this project I aim to examine the effects of vibrato, pitch, vowels, and spectrum in the dry signal on the vocoded sound and discover how choral music translates to vocoded vocals. <br />
<br />
I will be vocoding three choral pieces: Maurice Duruflé's "Ubi Caritas," Wolfgang Mozart's "Lacrimosa," and Eric Whitacre's "Lux Aurumque."<br />
<br />
<br />
'''Vocoder:'''<br />
I made a basic vocoder in Max MSP, but I am not satisfied with the quality of sound and annunciation of vowels/consonants. Since the goal of the project is to examine the effect of dry parameters on vocoded sound, the origin/originality of the vocoder is not necessarily a priority to me. I may end up toying with an already-made vocoder in Logic for the purposes of efficiency, timbre, and sound quality. <br />
<br />
<br />
'''Methodology:'''<br />
One singer is assigned to each voice for each song. Every song has bass, tenor, alto, and soprano lines, and some lines have multiple voices per line. The group will sing together during the recording sessions, but each singer will have an individual microphone placed closely to his/her mouth to isolate his/her voice from the group. The goal is to allow singers to blend during the piece while still recording the voices separately for individual vocoding. <br />
<br />
The following singers have generously agreed to record vocals for my project:<br />
<br />
<br />
[[Ubi Caritas]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Mia Farinelli<br />
<br />
<br />
[[Lux Aurumque]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Bass: Evan Gitterman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Tenor: Will Watson<br />
<br />
Alto: Laura Austin<br />
<br />
Alto: Michelle Jia<br />
<br />
Soprano: Mia Farinelli<br />
<br />
Soprano: Grace Laboy<br />
<br />
<br />
[[Lacrimosa]]<br />
<br />
Bass: Joel Chapman<br />
<br />
Tenor: Andrew Forsyth<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Grace Laboy<br />
<br />
Piano: Nayantara Jain<br />
<br />
<br />
'''Rehearsals and Testing:'''<br />
We held two rehearsals for each piece before recording. Both sessions went very swiftly because the singers sight-read each piece exceptionally well. During the rehearsals, I recorded some dry samples from Andrew Forsyth, the tenor voice. I had him sing phrases with and without vibrato. I also had him sing a short phrase (from Ubi Caritas) all on one note and then another take on another note. After vocoding each sample, I noticed the following:<br />
<br />
1) Vibrato creates a nice spectral modulation in the vocoded voice. The amplitude and high frequencies flutter and add a pleasant human quality to an otherwise robotic voice.<br />
<br />
2) When a vocoded note does not match the sung note from the dry sample, the formant of the vocoded voice is altered. Formant of the vocoded sample does not match that of the dry sample, and the vocoded sample sounds nasal as a result.<br />
<br />
3) Consonants are well defined by the vocoder. Therefore, it is important to sample experienced singers who know how to sing consonants in traditional choral pieces (rolled r's, short s's, etc…). <br />
<br />
<br />
'''Plan for Voice Vocoding:'''<br />
I will go through each vocal track and play each part's appropriate notes on the keyboard vocoder. This may be difficult and tedious, but I can use MIDI in Logic to edit/split/move notes according to the rhythm of the individual voices. Choosing good reverb will also be essential for blending and authenticity.<br />
<br />
'''Voice Vocoding Process:'''</div>Favishttps://ccrma.stanford.edu/mediawiki/index.php?title=Favis220c&diff=16552Favis220c2014-05-27T18:23:50Z<p>Favis: </p>
<hr />
<div>'''Overview:'''<br />
For my 220C project, I am making vocoder renditions of choral music. The vocoder takes spectral qualities of the human voice and translates them to an instrument, creating what seems to be a "talking instrument" or "robot voice." Over the course of this project I aim to examine the effects of vibrato, pitch, vowels, and spectrum in the dry signal on the vocoded sound and discover how choral music translates to vocoded vocals. <br />
<br />
I will be vocoding three choral pieces: Maurice Duruflé's "Ubi Caritas," Wolfgang Mozart's "Lacrimosa," and Eric Whitacre's "Lux Aurumque."<br />
<br />
<br />
'''Vocoder:'''<br />
I made a basic vocoder in Max MSP, but I am not satisfied with the quality of sound and annunciation of vowels/consonants. Since the goal of the project is to examine the effect of dry parameters on vocoded sound, the origin/originality of the vocoder is not necessarily a priority to me. I may end up toying with an already-made vocoder in Logic for the purposes of efficiency, timbre, and sound quality. <br />
<br />
<br />
'''Methodology:'''<br />
One singer is assigned to each voice for each song. Every song has bass, tenor, alto, and soprano lines, and some lines have multiple voices per line. The group will sing together during the recording sessions, but each singer will have an individual microphone placed closely to his/her mouth to isolate his/her voice from the group. The goal is to allow singers to blend during the piece while still recording the voices separately for individual vocoding. <br />
<br />
The following singers have generously agreed to record vocals for my project:<br />
<br />
[[Ubi Caritas]]<br />
Bass: Joel Chapman<br />
Bass: Evan Gitterman<br />
Tenor: Andrew Forsyth<br />
Tenor: Will Watson<br />
Alto: Laura Austin<br />
Soprano: Mia Farinelli<br />
<br />
<br />
[[Lux Aurumque]]<br />
Bass: Joel Chapman<br />
Bass: Evan Gitterman<br />
Tenor: Andrew Forsyth<br />
Tenor: Will Watson<br />
Alto: Laura Austin<br />
Alto: Michelle Jia<br />
Soprano: Mia Farinelli<br />
Soprano: Grace Laboy<br />
<br />
[[Lacrimosa]]<br />
Bass: Joel Chapman<br />
Tenor: Andrew Forsyth<br />
Alto: Laura Austin<br />
Soprano: Grace Laboy<br />
Piano: Nayantara Jain<br />
<br />
'''Rehearsals and Testing:'''<br />
We held two rehearsals for each piece before recording. Both sessions went very swiftly because the singers sight-read each piece exceptionally well. During the rehearsals, I recorded some dry samples from Andrew Forsyth, the tenor voice. I had him sing phrases with and without vibrato. I also had him sing a short phrase (from Ubi Caritas) all on one note and then another take on another note. After vocoding each sample, I noticed the following:<br />
<br />
1) Vibrato creates a nice spectral modulation in the vocoded voice. The amplitude and high frequencies flutter and add a pleasant human quality to an otherwise robotic voice.<br />
<br />
2) When a vocoded note does not match the sung note from the dry sample, the formant of the vocoded voice is altered. Formant of the vocoded sample does not match that of the dry sample, and the vocoded sample sounds nasal as a result.<br />
<br />
3) Consonants are well defined by the vocoder. Therefore, it is important to sample experienced singers who know how to sing consonants in traditional choral pieces (rolled r's, short s's, etc…). <br />
<br />
'''Plan for Voice Vocoding:'''<br />
I will go through each vocal track and play each part's appropriate notes on the keyboard vocoder. This may be difficult and tedious, but I can use MIDI in Logic to edit/split/move notes according to the rhythm of the individual voices. Choosing good reverb will also be essential for blending and authenticity.<br />
<br />
'''Voice Vocoding Process:'''</div>Favishttps://ccrma.stanford.edu/mediawiki/index.php?title=Favis220c&diff=16551Favis220c2014-05-27T18:23:27Z<p>Favis: </p>
<hr />
<div>'''Overview:'''<br />
For my 220C project, I am making vocoder renditions of choral music. The vocoder takes spectral qualities of the human voice and translates them to an instrument, creating what seems to be a "talking instrument" or "robot voice." Over the course of this project I aim to examine the effects of vibrato, pitch, vowels, and spectrum in the dry signal on the vocoded sound and discover how choral music translates to vocoded vocals. <br />
<br />
I will be vocoding three choral pieces: Maurice Duruflé's "Ubi Caritas," Wolfgang Mozart's "Lacrimosa," and Eric Whitacre's "Lux Aurumque."<br />
<br />
<br />
'''Vocoder:'''<br />
I made a basic vocoder in Max MSP, but I am not satisfied with the quality of sound and annunciation of vowels/consonants. Since the goal of the project is to examine the effect of dry parameters on vocoded sound, the origin/originality of the vocoder is not necessarily a priority to me. I may end up toying with an already-made vocoder in Logic for the purposes of efficiency, timbre, and sound quality. <br />
<br />
<br />
'''Methodology:'''<br />
One singer is assigned to each voice for each song. Every song has bass, tenor, alto, and soprano lines, and some lines have multiple voices per line. The group will sing together during the recording sessions, but each singer will have an individual microphone placed closely to his/her mouth to isolate his/her voice from the group. The goal is to allow singers to blend during the piece while still recording the voices separately for individual vocoding. <br />
<br />
The following singers have generously agreed to record vocals for my project:<br />
<br />
[[Ubi Caritas]]<br />
Bass: Joel Chapman<br />
Bass: Evan Gitterman<br />
Tenor: Andrew Forsyth<br />
Tenor: Will Watson<br />
Alto: Laura Austin<br />
Soprano: Mia Farinelli<br />
<br />
<br />
[[Lux Aurumque]]<br />
Bass: Joel Chapman<br />
Bass: Evan Gitterman<br />
Tenor: Andrew Forsyth<br />
Tenor: Will Watson<br />
Alto: Laura Austin<br />
Alto: Michelle Jia<br />
Soprano: Mia Farinelli<br />
Soprano: Grace Laboy<br />
<br />
[[Lacrimosa]]<br />
Bass: Joel Chapman<br />
Tenor: Andrew Forsyth<br />
Alto: Laura Austin<br />
Soprano: Grace Laboy<br />
Piano: Nayantara Jain<br />
<br />
'''Rehearsals and Testing:'''<br />
We held two rehearsals for each piece before recording. Both sessions went very swiftly because the singers sight-read each piece exceptionally well. During the rehearsals, I recorded some dry samples from Andrew Forsyth, the tenor voice. I had him sing phrases with and without vibrato. I also had him sing a short phrase (from Ubi Caritas) all on one note and then another take on another note. After vocoding each sample, I noticed the following:<br />
1) Vibrato creates a nice spectral modulation in the vocoded voice. The amplitude and high frequencies flutter and add a pleasant human quality to an otherwise robotic voice.<br />
2) When a vocoded note does not match the sung note from the dry sample, the formant of the vocoded voice is altered. Formant of the vocoded sample does not match that of the dry sample, and the vocoded sample sounds nasal as a result.<br />
3) Consonants are well defined by the vocoder. Therefore, it is important to sample experienced singers who know how to sing consonants in traditional choral pieces (rolled r's, short s's, etc…). <br />
<br />
'''Plan for Voice Vocoding:'''<br />
I will go through each vocal track and play each part's appropriate notes on the keyboard vocoder. This may be difficult and tedious, but I can use MIDI in Logic to edit/split/move notes according to the rhythm of the individual voices. Choosing good reverb will also be essential for blending and authenticity.<br />
<br />
'''Voice Vocoding Process:'''</div>Favishttps://ccrma.stanford.edu/mediawiki/index.php?title=220c-spring-2014&diff=16548220c-spring-20142014-05-27T18:10:06Z<p>Favis: /* Final Presentation Schedule */</p>
<hr />
<div>= Music 220C: Course Wiki =<br />
<br />
This is a community wiki page maintained by Music 220C class.<br />
<br />
'''Note''': Register a link to your project blog in the section below.<br />
<br />
= Final Presentation Schedule =<br />
<br />
The final presentation for Music220c will take place on CCRMA stage on June 10, 2014 starting at 3:30pm. Please put your name in one of the available spots. As some people might need more time than others, this is just an order and each spot can be as long as you want (may be not more than 20 minutes though). Please, specify any technical need you might have for your presentation. If you plan to do your presentation in the listening room, please sign up for the latest available spot on the list (once the presentations on stage will be over, we'll move to the listening room).<br />
<br />
1: <br><br />
2: Freddy Avis <br><br />
3: <br><br />
4: <br><br />
5: <br><br />
6: <br><br />
7: <br><br />
8: <br><br />
9: <br><br />
10: <br><br />
11: <br><br />
<br />
= Project Description =<br />
<br />
Romain Michon (Example): http://google.com <br><br />
Erich Peske: https://ccrma.stanford.edu/wiki/Ambisonic_rhythms <br><br />
Hana Shin: https://ccrma.stanford.edu/~hanashin/220c/ <br><br />
Evan Gitterman: [[Dillafier]] <br><br />
Andrew Forsyth: https://ccrma.stanford.edu/wiki/Voice_Pedal <br><br />
Gio Jacuzzi: https://ccrma.stanford.edu/~gjacuzzi/220c/index.html <br><br />
Shu Yu Lin: https://ccrma.stanford.edu/~sylin/220c/somewhereInBetween.html <br><br />
Alex Chechile: https://ccrma.stanford.edu/~chechile/220c/220csite.html<br><br />
Cooper Newby: http://www.coopernewby.com/tweeter-tracker <br><br />
Madeline Huberth: https://ccrma.stanford.edu/~mhuberth/220c/progress.html<br><br />
Byron Walker: https://ccrma.stanford.edu/~byron/220c/index.html<br><br />
Griffin Stoller: https://ccrma.stanford.edu/wiki/GriffinStollerProject <br><br />
Caleb Rau: https://ccrma.stanford.edu/user/c/crau/Library/Web/220c/crau220c.html <br><br />
Fang Yi Lin: [http://wiki.hikari-project.com/index.php/Project_EAST Project EAST] <br></div>Favishttps://ccrma.stanford.edu/mediawiki/index.php?title=220c-spring-2014&diff=16547220c-spring-20142014-05-27T18:09:50Z<p>Favis: </p>
<hr />
<div>= Music 220C: Course Wiki =<br />
<br />
This is a community wiki page maintained by Music 220C class.<br />
<br />
'''Note''': Register a link to your project blog in the section below.<br />
<br />
= Final Presentation Schedule =<br />
<br />
The final presentation for Music220c will take place on CCRMA stage on June 10, 2014 starting at 3:30pm. Please put your name in one of the available spots. As some people might need more time than others, this is just an order and each spot can be as long as you want (may be not more than 20 minutes though). Please, specify any technical need you might have for your presentation. If you plan to do your presentation in the listening room, please sign up for the latest available spot on the list (once the presentations on stage will be over, we'll move to the listening room).<br />
<br />
1: <br><br />
2: Freddy Avis<br />
3: <br><br />
4: <br><br />
5: <br><br />
6: <br><br />
7: <br><br />
8: <br><br />
9: <br><br />
10: <br><br />
11: <br><br />
<br />
= Project Description =<br />
<br />
Romain Michon (Example): http://google.com <br><br />
Erich Peske: https://ccrma.stanford.edu/wiki/Ambisonic_rhythms <br><br />
Hana Shin: https://ccrma.stanford.edu/~hanashin/220c/ <br><br />
Evan Gitterman: [[Dillafier]] <br><br />
Andrew Forsyth: https://ccrma.stanford.edu/wiki/Voice_Pedal <br><br />
Gio Jacuzzi: https://ccrma.stanford.edu/~gjacuzzi/220c/index.html <br><br />
Shu Yu Lin: https://ccrma.stanford.edu/~sylin/220c/somewhereInBetween.html <br><br />
Alex Chechile: https://ccrma.stanford.edu/~chechile/220c/220csite.html<br><br />
Cooper Newby: http://www.coopernewby.com/tweeter-tracker <br><br />
Madeline Huberth: https://ccrma.stanford.edu/~mhuberth/220c/progress.html<br><br />
Byron Walker: https://ccrma.stanford.edu/~byron/220c/index.html<br><br />
Griffin Stoller: https://ccrma.stanford.edu/wiki/GriffinStollerProject <br><br />
Caleb Rau: https://ccrma.stanford.edu/user/c/crau/Library/Web/220c/crau220c.html <br><br />
Fang Yi Lin: [http://wiki.hikari-project.com/index.php/Project_EAST Project EAST] <br></div>Favis

https://ccrma.stanford.edu/mediawiki/index.php?title=Favis220c&diff=16496 (Favis220c, 2014-04-29T17:55:11Z)
<p>Favis: </p>
<hr />
<div>'''Overview:'''<br />
For my 220C project, I am making vocoder renditions of choral music. The vocoder takes spectral qualities of the human voice and translates them to an instrument, creating what seems to be a "talking instrument" or "robot voice." Over the course of this project I aim to examine the effects of vibrato, pitch, vowels, and spectrum in the dry signal on the vocoded sound and discover how choral music translates to vocoded vocals. <br />
<br />
I will be vocoding three choral pieces: Maurice Duruflé's "Ubi Caritas," Wolfgang Mozart's "Lacrimosa," and Eric Whitacre's "Lux Aurumque."<br />
<br />
<br />
'''Vocoder:'''<br />
I made a basic vocoder in Max/MSP, but I am not satisfied with its sound quality and its enunciation of vowels/consonants. Since the goal of the project is to examine the effect of dry parameters on the vocoded sound, the origin/originality of the vocoder is not necessarily a priority. I may end up using a ready-made vocoder in Logic for the sake of efficiency, timbre, and sound quality. <br />
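As an illustration of the channel-vocoder idea described above, the band-energy transfer can be sketched frame by frame in Python with NumPy. This is a minimal sketch of the general technique, not the Max/MSP patch or Logic's vocoder; the function name and all parameters are illustrative.

```python
import numpy as np

def channel_vocoder(modulator, carrier, sr, n_bands=16, frame=1024):
    """Sketch of a channel vocoder: per short frame, measure the voice's
    (modulator's) energy in each frequency band and scale the carrier's
    band content to match."""
    # Trim both signals to a whole number of frames.
    n = (min(len(modulator), len(carrier)) // frame) * frame
    out = np.zeros(n)
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    # Log-spaced band edges from 100 Hz up to Nyquist.
    edges = np.geomspace(100.0, sr / 2.0, n_bands + 1)
    win = np.hanning(frame)
    for start in range(0, n, frame):
        M = np.fft.rfft(modulator[start:start + frame] * win)  # voice frame
        C = np.fft.rfft(carrier[start:start + frame] * win)    # instrument frame
        Y = np.zeros_like(C)
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = (freqs >= lo) & (freqs < hi)
            if not band.any():
                continue
            m_rms = np.sqrt(np.mean(np.abs(M[band]) ** 2))
            c_rms = np.sqrt(np.mean(np.abs(C[band]) ** 2))
            if c_rms > 0.0:
                # Impose the voice's band energy on the carrier's band content.
                Y[band] = C[band] * (m_rms / c_rms)
        out[start:start + frame] = np.fft.irfft(Y, frame)
    return out
```

A production vocoder would use overlapping windows and smoothed envelope followers, but the principle (measure the voice's energy per band, impose it on the carrier) is the same.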
<br />
<br />
'''Methodology:'''<br />
One singer is assigned to each voice for each song. Every song has bass, tenor, alto, and soprano lines, and some lines have multiple voices per line. The group will sing together during the recording sessions, but each singer will have an individual microphone placed close to his/her mouth to isolate his/her voice from the group. The goal is to allow singers to blend during the piece while still recording the voices separately for individual vocoding. <br />
<br />
The following singers have generously agreed to record vocals for my project:<br />
<br />
[[Ubi Caritas]]<br />
<br />
Bass: Joel<br />
<br />
Bass 2: ?<br />
<br />
Tenor: ?<br />
<br />
Tenor 2: Andrew Forsyth<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Mia Farinelli<br />
<br />
<br />
[[Lux Aurumque]]<br />
<br />
Bass: Joel<br />
<br />
Bass 2: Evan Gitterman?<br />
<br />
Tenor: ?<br />
<br />
Tenor 2: Andrew Forsyth<br />
<br />
Alto: ?<br />
<br />
Alto 2: Laura Austin<br />
<br />
Soprano: ?<br />
<br />
Soprano 2: Mia Farinelli<br />
<br />
<br />
[[Lacrimosa]]<br />
<br />
Bass: Joel<br />
<br />
Tenor: Andrew<br />
<br />
Alto: Laura Austin<br />
<br />
Soprano: Grace Laboy<br />
<br />
Piano: ?<br />
<br />
<br />
'''Voice Vocoding:'''<br />
I will go through each vocal track and play each part's appropriate notes on the keyboard vocoder. This may be difficult and tedious, but I can use MIDI in Logic to edit/split/move notes according to the rhythm of the individual voices. Choosing good reverb will also be essential for blending and authenticity.</div>Favis
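The edit/split/move step described above can be sketched abstractly. Assuming each MIDI note is a (start, duration, pitch) triple, a held vocoder note can be split at syllable onsets taken from the recorded vocal track; this is an illustration of the idea only, not Logic's actual MIDI editing interface.

```python
def split_note(note, cut_times):
    """Split one held MIDI-style note (start, duration, pitch) at the given
    cut times, e.g. syllable onsets measured from a vocal track. Cut times
    outside the note's span are ignored."""
    start, dur, pitch = note
    end = start + dur
    # Keep only cuts strictly inside the note, then build segment boundaries.
    points = [start] + sorted(t for t in cut_times if start < t < end) + [end]
    return [(a, b - a, pitch) for a, b in zip(points[:-1], points[1:])]
```

Run over a whole part, this turns one sustained keyboard note into separate notes that follow the singer's syllable rhythm.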
https://ccrma.stanford.edu/mediawiki/index.php?title=My220cProject&diff=16484 (My220cProject, 2014-04-29T17:17:48Z)
<p>Favis: </p>
<hr />
<div>Madeline Huberth - [https://ccrma.stanford.edu/~mhuberth/220c/progress.html]<br><br />
Alex Chechile - [https://ccrma.stanford.edu/~chechile/220c/220csite.html]<br><br />
Byron Walker - [https://ccrma.stanford.edu/~byron/220c/index.html]<br><br />
Shu Yu Lin - [https://ccrma.stanford.edu/~sylin/220c/somewhereInBetween.html]<br><br />
Freddy Avis - [https://ccrma.stanford.edu/wiki/Favis220c]<br></div>Favis
https://ccrma.stanford.edu/mediawiki/index.php?title=220a-fall-2014&diff=15405 (220a-fall-2014, 2013-10-03T17:14:12Z)
<p>Favis: </p>
<hr />
<div>Welcome to the Music 220a wiki!<br />
<br />
= Music Presentation Sign Up =<br />
<br />
Please use your full name as appeared on the class list.<br />
<br />
* 09/26 (THU) : 3 volunteers :)<br />
* 10/01 (TUE) : Chet Gnegy, Madeline Huberth, Holly Jachowski<br />
* 10/03 (THU) : Elliot Kermit-Canfield, Graham Davis, Clark Pang<br />
* 10/08 (TUE) : Victoria Grace, Alex Ramsey, Alex Chechile<br />
* 10/10 (THU) : Ezra Crowley, Freddy Avis<br />
* 10/15 (TUE) : Nette Worthey<br />
* 10/17 (THU) :<br />
* 10/22 (TUE) :<br />
* 10/24 (THU) :<br />
* 10/29 (TUE) :<br />
* 10/31 (THU) :<br />
* 11/05 (TUE) :<br />
<br />
= Tutorial Session Sept 26: HW1, simplest Chuck code & UNIX =<br />
<br />
Quick UNIX tutorial: http://freeengineer.org/learnUNIXin10minutes.html <br />
<br />
Download Chuck: http://chuck.cs.princeton.edu/release/<br />
<br />
Download MiniAudicle: http://audicle.cs.princeton.edu/mini/ <br />
<br />
<pre><br />
// sine oscillator to the DAC (try TriOsc, SqrOsc, etc.)<br />
SinOsc a => dac; <br />
//a => dac.right;  // or route to one channel only<br />
//a => dac.left;<br />
<br />
440 => a.freq;<br />
0.9 => a.gain;<br />
<br />
// pick a new random frequency every 100 ms<br />
while(true){<br />
    //<<< Std.rand2f( 100.0, 1000.0 ) >>>;  // uncomment to print the values<br />
    Std.rand2f( 100.0, 1000.0 ) => a.freq;<br />
    100::ms => now;<br />
}<br />
</pre></div>Favis