Field Recording, Composition and Performance for Ecology Conservation in Patagonia, Chile

Research Project Report

Note: This is a detailed report of the project, describing how I carried it out from start to end.

Shu Yu Lin

June 8, 2015

Project Title

Field Recording, Composition and Performance for Ecology Conservation in Patagonia, Chile


March 16- June 14, 2015


Incorporate the sounds of Magellanic woodpeckers to create a piece of music, and assist field research by video and sound recording the Magellanic woodpecker in order to understand its language, as a contribution to ecology conservation in Patagonia, Chile.

The project “Field Recording, Composition and Performance for Ecology Conservation in Patagonia, Chile” required me to travel to the Patagonia region of Chile to sound record and video record birds, especially the Magellanic woodpecker. I also set out to compose a piece of music that incorporates these recordings and to perform the piece. This project was completed with assistance from the Center for Latin American Studies, the Sub-Antarctic Biocultural Conservation Program in Chile, Professor Jaime Jimenez and researcher Amy Wynia from the University of North Texas (UNT). At Stanford, Professor Takako Fujioka, Professor Fernando Lopez-Lezcano and colleague Iran Roman from the Center for Computer Research in Music and Acoustics (CCRMA), as well as composer Alexandra Hay, supported this project. Other researchers at Puerto Williams, Chile and at CCRMA also lent a hand. Among these supports, the Center for Latin American Studies provided a Field Research Travel Grant of USD 1000, which was used solely to purchase tickets to Puerto Williams, Chile; it covered almost half of the round-trip air fare. Professor Jaime Jimenez was the main contact person while finalizing the trip and the scope of the project, and while arranging the stay at the field research station in Chile. Moreover, Professor Jimenez and Amy Wynia guided me in the forest during field recording. On the Stanford end, I was motivated by Professor Takako Fujioka's enthusiasm for this project, and I am grateful for the timely advice from Iran Roman and colleagues. The patience and technical assistance of Professor Fernando Lopez-Lezcano and the artistic guidance of composer Alexandra Hay made the composition part of this project possible. Without this assistance, I would not have been able to complete this rewarding journey.

The trip took place between March 16 and March 30. The arrival date was March 18 and the departure date was March 27, with 2 days of transit flights going to Puerto Williams and 3 days of transit returning to Stanford. The duration of the stay at Punta Arenas during transit depended on the availability of the flights between Punta Arenas and Puerto Williams and on the local weather. In my case, I needed to stay 1 night at Punta Arenas on the way to Puerto Williams and 2 nights at Punta Arenas on the way back to San Francisco. I therefore ended up staying in Puerto Williams for 10 days. Of these 10 days, not counting the day of arrival, the day of departure, a day with unfavorable weather and a day on which I experienced physical fatigue, I worked for 6 days. During these days, with the assistance of Professor Jaime Jimenez and Amy Wynia, both of whom knew the specific locations to find birds, I was able to obtain substantial recordings. After I returned to Stanford, I edited the raw recordings and composed the piece, Coexistence, for its premiere, which was held in the Stanford Bing Studio at 7:30pm on May 20, 2015. Performing the piece during the concert guaranteed me an audience, which met my expectation of spreading the idea of ecology conservation to the public through music. Overall, the process of carrying out the project was rewarding.

Even though the project met its goals, there were regrets. After several contacts with scientists who work for the Sub-Antarctic Biocultural Conservation Program, and taking time and budget into consideration, the destination was finalized as the city of Puerto Williams, located on Navarino Island, Chile. Navarino Island was the only place in the Magallanes region where I obtained recordings. Furthermore, the video recording part of the project had to be discarded, not only because the tight budget did not allow me to purchase or rent video equipment for the field, but also because of the difficulty of obtaining usable video during my short visit. The season was not the best for recording, since the migratory birds had almost all moved on to other locations. Moreover, I was not able to accomplish the goal of understanding the communication of the Magellanic woodpecker (Campephilus magellanicus) within the timeline. Besides these regrets, the other purposes of this project were realized.

During my visit to Navarino Island, I went to various locations to carry out field sound recording: Omora Park, the seashore of the city of Puerto Williams, Cerro La Bandera, the east side of Navarino Island and Península Zañartu. Of these locations, Omora Park is an ethnobotanical park run by the Puerto Williams University Center (Centro Universitario Puerto Williams), which strives to investigate the biosphere of the Cape Horn (Cabo de Hornos) region and to conserve the sub-Antarctic ecosystems. The center collaborates with the Sub-Antarctic Biocultural Conservation Program that I have been working with; the program is supported by the Institute of Ecology and Biodiversity (Instituto de Ecología y Biodiversidad) and the University of North Texas. Scientists who work for the Sub-Antarctic Biocultural Conservation Program visit Omora Park to carry out investigations. The park is located west of Puerto Williams, west of the start of the mountainous region of Cerro La Bandera, which begins approximately 3 kilometers from the city of Puerto Williams. North of Cerro La Bandera is Península Zañartu, which lies between Omora Park and Puerto Williams and extends into the Beagle Channel; many sea birds reside along the shores of the peninsula. Part of the east side of Navarino Island is used as a dump site; in the other portion of that region, scientists were able to find families of the Magellanic woodpecker. Traveling to these regions of Navarino Island, I experienced nature firsthand.

Professor Jimenez and Amy assisted me in the recording process by imitating bird calls, and each day I recorded a few tracks. They applied two main methods. One was the double knock, which was used to locate the Magellanic woodpecker: a person holding two wooden sticks struck a specially designed wooden resonance box twice in quick succession. The resulting sound mimicked the signature double knock of the woodpecker and echoed through the forest. Distant woodpeckers heard the sound and knocked twice on a tree trunk in response. It was during this process that the researchers were able to find a family of Magellanic woodpeckers, and I was able to record the calls of the juveniles and the knocking of the adult woodpeckers during their feeding. The other method was to simulate the sounds of the predators of a bird species. On hearing imitations of their predators' calls, birds called out to neighboring birds of the same species; eventually they gathered around the sound source and, calling all the while, tried to figure out what to do. With a Zoom H4n portable recorder, I was therefore able to record these calls from a very short distance away from the birds. The Zoom H4n portable recorder and its accessories, such as a wind shield, were the recording instruments I used. The recorder is suitable for outdoor recordings, and the quality of the resulting recordings was good enough to work with for my piece. Overall, without these helpful scientists, I would not have succeeded in the sound recording.

During the 6 working days on Navarino Island, I recorded a few tracks each day. On March 19, the first working day, I obtained 10 recordings from Omora Park and 4 recordings from the seashore of Puerto Williams. The next day, I went to the east side of the island and recorded 34 tracks. I took the third day off due to physical fatigue from walking and climbing in the woods, which I was not accustomed to. On the fourth day, March 22, 10 tracks were recorded at Cerro La Bandera and 10 files were collected at Omora Park. The following day I visited Omora Park and obtained 21 more recordings. On March 24, I went to Omora Park again to capture the cries of the thorn-tailed rayadito and the finch during bird banding. I recorded 15 tracks, most of which were obtained not during the banding itself but during walks in the forest while waiting for the birds to be netted. The next day was rainy, so no outdoor work was possible, and I stayed in the field research station, a building that hosts researchers temporarily visiting the island and the place where I stayed during my visit. Even though the ground was still mushy, on March 26 I went to Cerro La Bandera and recorded 4 tracks, then to Península Zañartu, where I obtained 12 raw recordings. That was the last of my working days, and I felt capable of walking to these two locations by myself. Going to the woods alone allowed more freedom than going with the researchers; for example, I could decide how long I wanted or needed to stay at one spot. Going with the scientists, I could obtain many usable recordings, but I could only stay at each spot for a short time, because the nature of their investigations required them to move frequently through the forest. Overall, the days spent on Navarino Island were productive.

After I came back from Chile, I began working on the piece, Coexistence. The piece was designed as an ambisonic multichannel work for a 3D surround system, composed especially for the Bing Studio, where the premiere took place. I chose to compose an ambisonic piece because this medium provides a 3D surround sensation to the audience. Thus, I focused on creating a sound field instead of composing for individual speakers. Had I composed for individual speakers, listeners might have identified the individual sound sources, and a simulation of a forest, which is essentially a giant resonant space, could not have been built. Since the Bing Studio is shaped like a giant rectangular resonance box, it was practical to create a sound field in it. With a sound field and the recordings embedded in the piece, the audience would naturally feel as if they were in a forest. Using the ambisonic technique, my initial hope was to let listeners put themselves in the shoes of a person experiencing the forest. The piece thus became a journey for the audience.
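The ICST externals handled the ambisonic encoding for me, but the core first-order (B-format) panning law is compact enough to sketch. The snippet below is an illustrative NumPy version, not the code used in the project; the azimuth/elevation convention and the FuMa-style 1/sqrt(2) weighting on W are assumptions for the sake of the example.

```python
import numpy as np

def encode_first_order(signal, azimuth, elevation):
    """Encode a mono signal into first-order ambisonic B-format.

    azimuth and elevation are in radians; returns the four channels
    (W, X, Y, Z) stacked into one array. Uses the classic FuMa-style
    1/sqrt(2) weighting on the omnidirectional W channel.
    """
    s = np.asarray(signal, dtype=float)
    w = s * (1.0 / np.sqrt(2.0))                 # omnidirectional component
    x = s * np.cos(azimuth) * np.cos(elevation)  # front-back component
    y = s * np.sin(azimuth) * np.cos(elevation)  # left-right component
    z = s * np.sin(elevation)                    # up-down component
    return np.stack([w, x, y, z])

# A source straight ahead (azimuth 0, elevation 0) puts all of the
# directional energy on the X channel:
b_format = encode_first_order(np.ones(4), azimuth=0.0, elevation=0.0)
```

Moving a source, as I did by dragging objects on the ambi-control panel, amounts to re-evaluating these gains as the azimuth and elevation change over time.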

The duration of the piece was initially planned as 10-11 minutes, within which I envisioned my musical ideas being delivered properly. However, due to both technical constraints and artistic issues encountered while composing Coexistence, a great amount of time was spent without fruitful results. In the interest of time, I therefore decided to compose approximately 6 minutes of music so that the piece could be completed and performed on schedule. The resulting composition was 6 minutes and 10 seconds long. Within this duration, I tried to convey the essence of coexistence between mankind and nature. Even though I was not able to meet the initial ambitious goal, it was still a complete piece.

The structure of Coexistence resembled a symphonic one: an introduction, statement I, statement II, a transition, a development and a recapitulation. In the introduction, I began with wind and water, with which life began. The calls of juvenile Magellanic woodpeckers were chosen as the first appearance of a bird and also the first representation of a life. With the sound of the birds' knocking, statement I was intended to put the audience in the shoes of an adventurer in the forest; in other words, I wanted the audience to feel as if they were in a forest. Statement II introduces mankind. Without using human voices, I chose the sounds of footsteps in the forest, a truck and airplanes to provide information about humans. The transition was relatively calm in comparison to the other sections of the piece. I used the sound of an airplane as the main sonic object; after all, the airplane, a product of technology, is a human version of a bird. The airplane also provided, at a certain level, clues to the statement the piece makes: that humans and nature can coexist if a balance can be found. The development pushed all of the elements toward the waterfall, the main object in the climax. After the waterfall, the juveniles reappeared, accompanied by the thrush and the river in the background, to end the piece in tranquility.

Both the rehearsal and the premiere of Coexistence took place in the Stanford Bing Studio. The rehearsal was held on May 19, 2015, the day before the concert, and great technical difficulties were met. The project, an Ardour session, seemed to be too large for the computer to handle, though whether the bottleneck was the software or the hardware was unknown. Almost half of the signals could not be displayed or played back. Professor Lopez-Lezcano tried numerous ways to solve the issue; in the end, he settled on importing channels from the mixed version into a newly created Ardour session and playing back from the new session. The premiere was held in the Bing Studio at 7:30pm on Wednesday, May 20, 2015, as part of the MUSIC 222 Sound in Space course concert and also as part of the Center for Computer Research in Music and Acoustics (CCRMA)'s annual spring concert. The pieces presented that night were very diverse, and the audience included both familiar and unfamiliar faces; a friend of a UNT scientist whom I worked with in Chile attended the concert. From the feedback I received, the audience seemed to enjoy the program. For my piece, the premiere was a success.

During the process of composing Coexistence, technical and artistic difficulties were met. Before I could use any of the raw recordings, I needed to clean the environmental sound from the selected ones. There were in total 120 sound files of various durations, each with its own distinct background noise. To select the ones I thought had the most potential to work with, I listened to all of the recordings and picked 72 out of the 120 raw files. I went through this filtering process twice, which eventually left me with 40 recordings to edit. The editing software I chose were the Interactive Source Separation Editor (ISSE), developed by colleagues at CCRMA, and Audacity. Using ISSE, I was able to keep the sound I wanted to work with in a recording and discard the unusable parts by “painting” on the spectrogram, the visualization of the sound in the frequency domain. The wanted and unwanted parts of a sound file were each marked with a color, red or blue, for the software to separate the two sources. Audacity was used to apply a general filter, usually a high-pass and a low-pass filter, to a recording depending on the circumstance; it was also used to trim the files during the cleaning process. To generate intermediate ambisonic sound files, I used the ICST externals for Max/MSP together with Ardour. The ICST externals allowed me to load the cleaned sound files, encode them to ambisonics and play them back through speakers. During encoding and playback, I controlled the position and movement of the sound by dragging the sound source objects on the ambi-control panel of the ICST externals. These ambisonic signals were then recorded in Ardour in real time, and the recorded intermediate files were used as elements in the piece. Since Ardour is software designed for recording, editing and mixing sound files, it was naturally the software I chose to put the piece together.
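The combination of a high-pass and a low-pass filter amounts to a band-pass: everything outside a chosen frequency band is attenuated. As an illustration only, the sketch below implements a crude brick-wall band-pass in the frequency domain with NumPy; the 1 kHz-8 kHz band is an invented example, not the settings I actually used in Audacity, and a real editor would use a smoother filter shape to avoid ringing.

```python
import numpy as np

def bandpass_fft(signal, sample_rate, low_hz, high_hz):
    """Zero out spectral content outside [low_hz, high_hz].

    A crude brick-wall band-pass: fine for illustration, though a
    proper filter design (e.g. Butterworth) avoids ringing artifacts.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Example: one second of a 100 Hz rumble plus a quieter 2 kHz
# bird-like tone, sampled at 44.1 kHz. Filtering keeps only the tone.
sr = 44100
t = np.arange(sr) / sr
noisy = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
cleaned = bandpass_fft(noisy, sr, low_hz=1000, high_hz=8000)
```

In practice the cutoff frequencies were chosen per recording, since every file had a different background noise profile.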
However, when I was generating the intermediate files, I ran into Ardour problems and computer hardware issues. Both Ardour and Max/MSP were open on my MacBook Pro, which had 8 GB of RAM, a two-core CPU and a hard disk. Playing back the 16 ambisonic encoded signals exceeded the reading speed of the hard disk. As a result, Ardour frequently complained and stopped playback, to the extent that it was impossible to get a full picture of the piece in its working stage. Therefore, for most of Coexistence, I worked “blindfolded” on my laptop. After completing a certain stage, I would transfer the Ardour session to the powerful Linux machine in the listening room at CCRMA and play it back from there. Another issue I encountered was that Ardour would quit by itself at random without giving any message; the cause may have been that the workload of the session exceeded the capability of my laptop's hardware. Fortunately, with help from the computer systems administrator at CCRMA, the causes of these issues were identified and workarounds were found.

Furthermore, I encountered artistic issues. Because the nature of the project was to convey the idea of ecology conservation through music, I restricted myself to sound files in which the bird calls were clean in their raw state. Initially I planned to incorporate as many of these sound files as possible so that listeners could identify the birds. In the middle of the agonizing selection process, Alexandra pointed out to me that, while working, I should focus on what I think rather than on what others will think of the piece. With this valuable advice in mind, I was able to work smoothly. Eventually, sketches of the score and the structure of the piece were conceived.

While putting the intermediate sound files in their appropriate places in the Ardour session, I realized that many further adjustments needed to be made. The first adjustment was to make a sonic object sound “real”. For example, to make a recording of a waterfall sound like a waterfall, I needed to layer three frequency-shifted versions of the waterfall sound file: one shifted to a higher frequency, one at the original frequency and one shifted lower. I created these three versions using Audacity. Along with them, I also needed to incorporate the sound of a river, so four components made up the waterfall. For many of the sound files, this was the trick I applied to make an object sound “real”: real in the sense that listeners would be able to recognize the object, yet not real, because it was a result of post-editing. For the more abstract components of the forest, such as the wind, I relied on the sounds of tangible objects or bird calls to make them feel part of the scene. To create the wind, I took a segment of silence from one of the recordings. The silence the recorder captured contained signals at very low amplitude; in other words, it was actually “environmental sound” or, from a signal processing perspective, noise. I took advantage of its softness and rendered it to be continuous and random, like wind and breeze. However, in reality, wind and breeze make no sound on their own. Thus, to convince the audience that what they heard was the sound of the wind, I experimented with combining the sounds of tangible objects with these abstract components, trying to find a balance between the two. It was this balance I strove for in order to persuade the listeners. At the beginning of the piece, the wind is the first sound to be heard, followed by the river.
The wind was the abstract component, and the river was a tangible object that you can see and touch. Water, when still, makes no sound, but when it runs, as in a river, it does; therefore, the pairing worked at the beginning of the piece. In essence, I played the tangible against the intangible to create sonic scenes, a technique applied in various places in the piece. Another adjustment I needed to make was to exaggerate and manipulate the prominent characteristics of the sound files. For example, a call of the thrush lasts longer than a call of the thorn-tailed rayadito; by shortening, elongating, pitch shifting and applying other effects, I was able to produce rhythmic patterns. For many of the sound files, applying effects was the method for creating variants. With the intermediate recordings, the rendered sound files and a plan for the piece, I composed Coexistence.
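The pitch shifting described above was done in audio editors, but the simplest form of the idea, resampling a recording to transpose it up or down, can be sketched in a few lines. The helper below is a hypothetical NumPy illustration, not the project's actual processing chain; note that naive resampling also changes duration, which is often exactly what is wanted when turning one call into rhythmic variants.

```python
import numpy as np

def pitch_shift_by_resampling(signal, semitones):
    """Transpose a signal by resampling (this also changes duration).

    Shifting up by `semitones` shortens the sound; shifting down
    lengthens it -- useful for turning one bird call into variants.
    """
    factor = 2.0 ** (semitones / 12.0)        # frequency ratio
    n_out = int(round(len(signal) / factor))  # new length after resampling
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(new_idx, old_idx, np.asarray(signal, dtype=float))

# One octave up (+12 semitones) halves the duration of a call:
call = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 44100))
higher = pitch_shift_by_resampling(call, 12)
```

Layering such shifted copies with the original, as with the three frequency-shifted waterfalls, thickens the spectrum while keeping the source recognizable.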

In the future, I would like to revise Coexistence and perform the new version. First, I will edit some of the sound files to a level usable for ambisonic recording, to avoid creating numerous ambisonic tracks in Ardour. Moreover, I will organize the existing Ardour session and re-record some of the intermediate ambisonic sound files. This will reduce the number of tracks in the session and thereby prevent Ardour from crashing under the large workload. Lastly, I will extend the piece to 10-11 minutes to meet its initial design. I expect to work on these aspects between July 1 and September 30, 2015. The revised version may be part of the program of the CCRMA Transition concert or the CCRMA fall concert of 2015. I am optimistic about achieving these extensions of the project.

Overall, the project “Field Recording, Composition and Performance for Ecology Conservation in Patagonia, Chile” was completed within the proposed timeline, and it was a fruitful project. Sound recording was done in the Patagonia region of Chile, specifically on Navarino Island, and a piece incorporating these recordings was composed at Stanford and premiered on May 20, 2015 in the Stanford Bing Studio. Even though I encountered numerous difficulties along the way, in the end the goals of the project were accomplished. Without the assistance of the Center for Latin American Studies, Professor Takako Fujioka, Iran Roman, Professor Jaime Jimenez, Amy Wynia, Professor Fernando Lopez-Lezcano, Alexandra Hay, researchers from CCRMA and investigators from the Sub-Antarctic Biocultural Conservation Program, I would not have been able to complete this project. The project began with a simple idea, delivering the message of ecology conservation through music, and ended with an unforgettable premiere.
