PROFESSIONAL

LARGE-TEAM PROJECTS

Below is a selection of products I have contributed to as an employee, listed in reverse chronological order (most recent first).

Cruise Origin by Cruise is an upcoming driverless vehicle with no steering wheel. I'm an engineering manager in the AI department, and my teams have covered sensor simulation and authoring tools for simulated driving scenarios. Click on the image to read more about sensor simulation.
Display.land by Ubiquity6 was a photogrammetry app that turned objects and environments into 3D meshes, which could then be used in shared AR and VR experiences. I started out as a tech lead for their proprietary game engine and API, and then became an engineering manager for the studio team. (The company and app are no more, but you can check out Polycam for the successor app that was spun off by our computer vision team.)
Sansar by Linden Lab: After taking over as their audio engineer, I added live audio streaming, in-editor preview, audio functionality for our script API, and obstruction/occlusion of sounds by walls and static objects. I also recruited internal talent to create music and sound effects for our app. As the tech lead across our content creation teams, I found and resolved pain points in our development processes and eliminated friction from our editor workflows.
Mafia 3 by Hangar 13 (2K Games): As their audio engineer, my first responsibility was to integrate Wwise from scratch into Hangar 13's proprietary game engine. I stayed onboard to support audio tech needs for Mafia 3 and related downloadable content, which we shipped on PC, Xbox One, and PS4.
Star Wars: 1313 and Star Wars: First Assault by LucasArts: I continued as audio engineer for these games, where we again used Wwise as the audio engine. I also took over database development for the dialogue system. (Unfortunately, our studio closed before either game was released.)
Star Wars: The Force Unleashed 2 by LucasArts: I continued with my audio and build roles for this game and related content. While we had used our own in-house audio engine on the first game, for the sequel we integrated Wwise (Audiokinetic's commercial sound engine). As with the first game, we shipped this on PC, Xbox 360, and PS3.
Star Wars: The Force Unleashed by LucasArts: I was part of the audio development team behind this game, as well as the sole build engineer. Instead of using canned animation, characters and materials uniquely react to stimuli using NaturalMotion's Euphoria and Pixelux's DMM (Digital Molecular Matter) technologies, respectively. Furthermore, this was our first game built on Zeno/Ronin, the rendering engine and authoring suite used by Industrial Light and Magic.
EUCON Protocol by Euphonix (now Avid): I was on the team behind the first audio consoles featuring the EUCON protocol (which is still used today), in particular the MC Controller and System-5 MC. What made them stand out from other mixing consoles was the protocol's ability to integrate with Pro Tools, Logic Pro, Nuendo, Pyramix, Digital Performer, Final Cut Studio and more.

SMALL-TEAM PROJECTS

Below is a list of projects I've conducted on my own or in a small team.

↑ live demo | GitHub

↓ bottom player uses Gapless 5 as well

Gapless 5 is a JavaScript audio player I wrote to address two complementary limitations: HTML5 Audio doesn't support seamless transitions, while WebAudio can't start playing a track until it has fully loaded.

To get around this, Gapless 5 uses both APIs. If the WebAudio buffer hasn't finished loading, playback starts with HTML5 Audio and then crossfades to WebAudio once the buffer is ready. Several others have contributed to this project on GitHub over the years.
Daily Fuzzy is a mobile app that collects adorable animal pictures and videos using Reddit's API. I created this for my partner, who thought it should be put on the App Store. Now available for both iOS and Android devices.

Update: I've made the code open-source. Here is the source for iOS and the source for Android.
get_cover_art is a Python package and command line tool that downloads and embeds cover art for your audio files. It requires no manual intervention and is more reliable than Apple Music's "Get Album Artwork" feature (though it still uses their catalog of standardized high-quality artwork).

Clicking the album cover takes you to the PyPI project page, but you can also visit the GitHub page.
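
To give a flavor of the approach, here is a minimal Python sketch of the two core steps: look up artwork through the public iTunes Search API and embed it with mutagen. This is an illustration only, not the package's actual code, and the file name in the final line is a hypothetical example.

    # Illustrative sketch (not get_cover_art's actual code): fetch album art
    # from the iTunes Search API and embed it into an MP3 with mutagen.
    import requests
    from mutagen.id3 import ID3, APIC

    def embed_cover(mp3_path: str, artist: str, album: str) -> None:
        # Look up the album in Apple's public catalog.
        resp = requests.get(
            "https://itunes.apple.com/search",
            params={"term": f"{artist} {album}", "entity": "album", "limit": 1},
            timeout=10,
        )
        results = resp.json().get("results", [])
        if not results:
            return
        # Swap the default thumbnail size for a larger variant of the same image.
        art_url = results[0]["artworkUrl100"].replace("100x100", "600x600")
        art_data = requests.get(art_url, timeout=10).content
        # Attach the image as an ID3 "attached picture" frame
        # (assumes the MP3 already has an ID3 tag).
        tags = ID3(mp3_path)
        tags.add(APIC(encoding=3, mime="image/jpeg", type=3, desc="Cover", data=art_data))
        tags.save()

    embed_cover("example.mp3", "Some Artist", "Some Album")  # hypothetical file
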
Mobile Fusion Tables was a mobile-friendly web template that turned your Fusion Tables data into a navigable and searchable web app. I worked on this with a small team at Code for America's local (SF) brigade. This project took existing work for searchable Fusion Table maps and made it mobile-friendly and easily customizable.

Google has since discontinued Fusion Tables, but you can still visit our old GitHub page from the icon on the left.
gallery_get is a Python package and command line tool that crawls and downloads images from gallery sites. I got tired of galleries that redirect their image links, so I wrote this to crawl the redirects for me. It has plugins for different sites.

Clicking the icon takes you to the PyPI project page, but you can also visit the GitHub page.
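
To illustrate the general idea (this is not the package's actual plugin code), here is a rough Python sketch: fetch a gallery page, pull out its image links with a site-specific pattern, and follow each redirect chain to the real file. The regex and URL structure below are hypothetical; real plugins are tailored per site.

    # Rough sketch of the redirect-crawling idea behind gallery_get.
    import os
    import re
    import requests

    def download_gallery(gallery_url: str, dest_dir: str = "downloads") -> None:
        os.makedirs(dest_dir, exist_ok=True)
        html = requests.get(gallery_url, timeout=10).text
        # Hypothetical pattern; a real plugin uses a site-specific one.
        links = re.findall(r'href="([^"]+/image/[^"]+)"', html)
        for link in links:
            # allow_redirects follows the redirect chain to the actual image URL.
            resp = requests.get(link, allow_redirects=True, timeout=10)
            filename = os.path.basename(resp.url) or "image.jpg"
            with open(os.path.join(dest_dir, filename), "wb") as f:
                f.write(resp.content)
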
Doblet (now Dash Plus) was a network of portable chargers for your smartphone: you could borrow them at libraries, bars, co-working spaces, and other venues in the San Francisco Bay Area. I was the first iOS developer for this Y Combinator-backed company, and later coordinated engineering dependencies between the Android/iOS clients and the Rails backend. Click the icon to see a demo video of Doblet's integration with Uber.
DemandVille was a Rails application to help small businesses decide what products and features to build, based on customer feedback and pre-orders. Gregor Hanuschak and I worked together on this project, where he drove the business side and I was responsible for the tech.

In the end we didn't find product-market fit, but I gained the perspective of co-founding a business, pitching to investors, and recruiting talent. Click the icon to see a demo of how our site worked.

DESIGN / WEB

Below is a list of designs and websites I've created.

The Alchemist's Guide to Alcoholic Beverages maps alcoholic beverages onto the four classical elements of fire, earth, water, and air, grouped and color-coded by food pairings and other categories.

Update: Taschen published my graphic as part of their book, Food & Drink Infographics! Order the book here.
Cookbook for Nerds / Flowchart Kitchen: I envisioned and developed a diagram-based cookbook with the help of my sister. It's called "Cookbook for Nerds" and is currently available as an e-book on Kindle, Scribd, and Google Play.

Update 1: Taschen published our banana bread recipe as part of their book, Food & Drink Infographics! Order the book here.

Update 2: We teamed up with a designer to make a new edition, which I'm also turning into an online version.
The Dark Side of the Bay: The BART (subway) map of the SF Bay Area already looks like the diffracted light beam from a certain album cover... so here is my take on it. It's available as a T-shirt or poster from two stores: Society 6 and Threadless.
THE402: An interactive randomized "album", created in collaboration with two German industrial musicians. I implemented the site using my Gapless 5 player featured above. The code for the site itself is open source and hosted here.
Zen Finger Painting: A band website that takes you to a listening page for the latest album. It uses my Gapless 5 player featured above.

Note: I intentionally obfuscated the site's JavaScript to prevent web crawlers from downloading our album.
This site (how meta is that): It uses my Gapless 5 audio player and queries public Google Drive spreadsheets to populate the record collection catalog on my personal page, but is otherwise static. I've periodically maintained it since 2001, updating it over the years to keep up with changing browsers and web standards.

AUDIO RESEARCH

Below is a selection of audio research projects I have conducted or been involved in.

Lathe Cut Vinyl Records: Audio Comparisons and Reviews is my first informational video for the general public. I spent a year researching and trying out different lathe cut services to have my own music cut to vinyl (watch the records being played here). Not everyone has the time and money to do that research and try out multiple services, so I made this video to help others make an informed decision.
Gamelan Sequencer: I found that western scores and MIDI files aren't well-suited for composing and playing pieces for a gamelan ensemble, so I decided to write my own format. It's inspired by the kepatihan cipher system and features a corresponding sequencer script in Python. Provided with instrument samples (which you can override), the script turns a score into a recording. I was fortunate enough to find instrument samples of the UC Davis Gamelan Ensemble, recorded for ketuk-ketik.com by Elisa Hough, and with permission I'm using said samples to seed this system. Click on the kantilan to the left to visit the repository, or scroll below to hear a piece I transcribed with it.
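
To give a feel for how a cipher score can become audio, here is a minimal Python sketch: each symbol in a kepatihan-style line lands on a beat grid, and the corresponding sample is mixed in and allowed to ring. The score string, sample dictionary, and fixed beat grid are simplifications; the actual format and sequencer in the repository handle much more (tempo, damping, multiple instruments).

    # Minimal sketch of the rendering idea (not the repository's actual code).
    import numpy as np

    SAMPLE_RATE = 44100

    def render_cipher_line(cipher: str, samples: dict, beat_sec: float = 0.5) -> np.ndarray:
        """cipher: digits for scale degrees, '.' for a rest, e.g. "3561.532".
        samples: maps each digit to a pre-loaded mono numpy array."""
        beat_len = int(beat_sec * SAMPLE_RATE)
        out = np.zeros(beat_len * len(cipher), dtype=np.float32)
        for i, symbol in enumerate(cipher):
            if symbol == ".":
                continue  # rest: leave silence
            note = samples[symbol]
            start = i * beat_len
            end = min(start + len(note), len(out))
            out[start:end] += note[: end - start]  # let the bar ring past its beat
        return out  # write out with e.g. soundfile.write("line.wav", out, SAMPLE_RATE)
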
HRTF Calibrator: While in Stanford's CCRMA program I invented a calibration system to customize head-related transfer functions (HRTFs) to individuals without having their ears measured, modeled, or fitted with a microphone. It's based on the same concept as an eye exam: the subject listens to pairs of stimuli that are spatialized using slightly different HRTFs and decides which one of each pair sounds better spatialized. When calibration is complete, the program saves a custom configuration based on the results. The subject can then use the program to hear 3D demos or spatialize their own audio samples. Click on the headphones to the left for more information and downloads, and click here for the related patent application.
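
The sketch below illustrates only the "eye exam" selection idea, framed as a single-elimination tournament over a pool of candidate HRTFs; it is not the calibration algorithm from the patent application, and play_pair/ask_listener are hypothetical stand-ins for the playback and response steps.

    # Conceptual sketch only: narrow a set of candidate HRTFs by pairwise listening tests.
    import random

    def calibrate(candidate_hrtfs, play_pair, ask_listener):
        """play_pair(a, b) renders the same stimulus through HRTFs a and b;
        ask_listener() returns 'A' or 'B' for whichever sounded better spatialized."""
        pool = list(candidate_hrtfs)
        while len(pool) > 1:
            random.shuffle(pool)
            survivors = []
            # The preferred HRTF of each pair advances to the next round.
            for a, b in zip(pool[0::2], pool[1::2]):
                play_pair(a, b)
                survivors.append(a if ask_listener() == "A" else b)
            if len(pool) % 2:
                survivors.append(pool[-1])  # odd one out gets a bye
            pool = survivors
        return pool[0]  # the winner seeds the listener's custom configuration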

Audio Codec: I wrote an audio codec in C with two other people for a course in Stanford's CCRMA program. The encoder and decoder run from a Linux terminal, differentiate between tone and noise maskers, and let the user specify the bitrate, block size, and alpha (mask addition coefficient), as well as choose among several masking functions. Click on the cowbell to the left to read the documentation, and click on the Schubert song samples below to hear the sound quality of the encoded audio.
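
For context on the alpha parameter, below is one common formulation of nonlinear mask addition: each masker's intensity is raised to the power alpha before summing, so alpha = 1 reduces to plain intensity addition. This is a generic textbook form, and our codec's exact masking functions may differ.

    # Generic sketch of alpha-power mask addition (not our codec's exact code).
    import numpy as np

    def combine_masks(mask_levels_db: np.ndarray, alpha: float = 0.3) -> np.ndarray:
        """mask_levels_db: (num_maskers, num_bins) individual masking thresholds in dB."""
        intensities = 10.0 ** (mask_levels_db / 10.0)              # dB -> intensity
        combined = np.sum(intensities ** alpha, axis=0) ** (1.0 / alpha)
        return 10.0 * np.log10(combined)                           # back to dB

    # Two maskers of 40 dB and 42 dB in a single bin:
    print(combine_masks(np.array([[40.0], [42.0]]), alpha=0.3))
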
Phase Perception Model: During the first two quarters at Stanford I worked with Professor Malcolm Slaney and classmate Hiroko Terasawa on comparing experimental results of phase perception with Prof. Slaney's MATLAB perception model. Click the ear on the left to see my documentation and LISP code. Also, here is the slide show from our March 24, 2004 presentation at the pre-CoSyNe workshop on Auditory Processing of Vocalizations and other Complex Sounds at the Cold Spring Harbor Laboratory, New York.
Foosball Live! In a team with Wai Kit Leung and Ariege Misherghi for a human-computer interface course in Stanford's CCRMA program, I fitted a foosball table with sensors to create chance music and realistic crowd reaction based on game play, accompanied by announcer sound bites for scored goals and victory music cued at the end of each game. The sensors are wired to an AVRmini chip running a C program I wrote to compute the rate of motion of each pole and send the values via OSC to our patch in Pure Data (an open-source relative of Max/MSP). We also used a third-party patch for the crowd reaction (written by Paul Leonard, featured here).
Pee-wee's Pencil Sharpener is a rotary pencil sharpener that I wired to act as a talking jack-in-the-box for a human-computer interface course in Stanford's CCRMA program. It "sings" the melody to "Pop Goes the Weasel" at the rate at which the crank is turned. When the crank causes the pencil brace to retract, the verse melody jumps to the refrain with a Pee-wee Herman-style laugh. I wired a bend sensor and a mechanical rotary encoder to an AVRmini chip running a C program that calculates crank speed and sets a flag when the bend sensor crosses a threshold, sending the filtered values via OSC to a Pure Data patch that plays and manipulates the samples accordingly. Click on the thumbnail for a video clip.
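
The real control code ran in C on the AVRmini, but the logic boils down to something like the Python sketch below: derive crank speed from encoder ticks, watch the bend sensor for the brace retracting, and send both over OSC. The sensor-reading functions, threshold, and OSC addresses are hypothetical stand-ins.

    # Rough re-imagining in Python (the original was C on an AVRmini).
    import random
    import time
    from pythonosc.udp_client import SimpleUDPClient

    def read_encoder_ticks() -> int:      # hypothetical stand-in for the encoder read
        return int(time.time() * 40)      # pretend the crank turns steadily

    def read_bend_sensor() -> int:        # hypothetical stand-in for the bend-sensor ADC
        return random.randint(0, 1023)

    BEND_THRESHOLD = 512                             # assumed retraction threshold
    client = SimpleUDPClient("127.0.0.1", 9000)      # Pure Data patch listening here

    last_ticks, last_time = read_encoder_ticks(), time.time()
    while True:
        time.sleep(0.02)                             # ~50 Hz update rate
        now = time.time()
        ticks = read_encoder_ticks()
        crank_speed = (ticks - last_ticks) / (now - last_time)   # ticks per second
        last_ticks, last_time = ticks, now
        client.send_message("/crank/speed", crank_speed)
        if read_bend_sensor() > BEND_THRESHOLD:
            client.send_message("/brace/pop", 1)     # jump verse -> refrain + laugh
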
Jegogan Synth: Inspired by the unique tuning system of Balinese gamelan ensembles, I coded a patch in Pure Data that plays Jegogan pairs based on MIDI input for a course in Stanford's CCRMA program. The Jegogan is played with a single mallet, while the free hand manually mutes the otherwise undamped bars. In order to make the keyboard 'feel' like a Jegogan, I had the black keys sound each note (Jegogans have 5 notes to an octave) while the white keys mute the notes. Click on the left screenshot for more information, or here to obtain the patch.
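
The instrument itself is a Pure Data patch, but the key mapping boils down to something like the Python sketch below. The rule that a white key damps the nearest ringing degree, and the start_tone/stop_tone calls, are my simplifications for illustration rather than the patch's exact behavior.

    # Sketch of the black-key-strikes / white-key-damps mapping (not the actual patch).
    BLACK_PITCH_CLASSES = [1, 3, 6, 8, 10]   # C#, D#, F#, G#, A# within each octave

    def start_tone(degree, octave):          # stand-in for the synthesis engine
        print(f"ring degree {degree}, octave {octave}")
        return (degree, octave)

    def stop_tone(voice):                    # stand-in for damping a ringing bar
        print(f"damp {voice}")

    def handle_note_on(midi_note: int, voices: dict) -> None:
        pitch_class = midi_note % 12
        if pitch_class in BLACK_PITCH_CLASSES:
            # Black key: strike one of the five bars; it rings until muted.
            degree = BLACK_PITCH_CLASSES.index(pitch_class)
            voices[degree] = start_tone(degree, octave=midi_note // 12)
        else:
            # White key: damp the nearest ringing bar (a simplifying assumption).
            degree = min(range(5), key=lambda d: abs(BLACK_PITCH_CLASSES[d] - pitch_class))
            if voices.get(degree):
                stop_tone(voices[degree])
                voices[degree] = None

    voices = {}
    handle_note_on(61, voices)   # C#4: strike degree 0
    handle_note_on(60, voices)   # C4: damp it again
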
Mozart's Eine Kleine Nachtmusik: Behind the Editions was my senior thesis for MIT's music program. This musicology paper investigates discrepancies between editions of the piece, particularly in the treatment of staccato markings. It features tables I painstakingly assembled to compare those markings across the original autograph and the early and later editions.

PRODUCTION WORK

Below is a list of audio projects I have produced.


Home Studio Recordings

See my music page. All recordings of bands and solo efforts were produced by me, except where indicated.

MIT Media Lab

Sound design for Flavia Sparacino's interactive exhibit of Kent Larson's Unbuilt Ruins (1999-2000)
Post-production for Tod Machover's Hyperinstrument press kits and Toy Symphony concerts (2001-2003)

HUSEAC (Harvard University)

Assisted Tod Machover's Hyperstring Trilogy (2003)

CCRMA (Stanford University)

Recorded and produced various jazz and rock groups at CCRMA's music studios

Home Studio: Post-Production Projects

Stereo remixes of Beach Boys songs and albums that were released only in mono
80s pop and electronica mixes with vocal sections removed, currently playing at Zushi Puzzle in San Francisco
Restorations of vinyl recordings

COMPOSITIONS

Below is a sample of electronic and traditional compositions I have made.

Invention in D minor is a piece I composed for Peter Child's "Writing in Tonal Forms" course at MIT. It follows Bach's style of inventions for the harpsichord, and this recording is from a sight reading by Mark Kroll. I went through several iterations of the composition with guidance and feedback from Peter Child before arriving at the final version presented here.
Kotekan Sonatina is a piece I composed for Evan Ziporyn's "Music of Indonesia" course at MIT. The form of the piece is based on sonata form, while the counterpoint is based on Balinese Kotekan, which consists of interlocking patterns between parts. A "score" of the composition can be found here.
  • Javanese version: This new version sounds closer to my original intent, though instead of using Balinese instrument samples it uses Javanese ones recorded from the UC Davis Gamelan Ensemble. I wrote a Python script (described above) to transform my original score into a recording using the samples. I also made some minor adjustments to the first half of the composition.
  • TS-10 version: Originally, I had recorded this on an Ensoniq TS-10 synthesizer / sequencer using synth mallet instruments with western tuning, which made it sound more like Steve Reich than a gamelan performance.
Screwdrivabilitation is a piece I created for Tod Machover's "Projects in Media" course at the MIT Media Lab in 2002, combining distortion from a 4-track cassette tape recorder and synthesized AM and FM waves via Max/MSP. It was a commentary about our persistent fascination with analog despite the takeover of digital technology (and this was before vinyl's comeback!). The piece depicts a struggle between analog and digital sounds, such that as the piece progresses, the digital sources get warmly distorted while the analog channels align in phase. (Spoiler alert: analog wins.)

My initial take on this piece was a bit on the droning and repetitive side, but Tod Machover's guidance and feedback helped me appreciate the effect of developing a piece over time: development throughout the piece itself, but also the practice of revisiting your work and listening to it with fresh ears.

Thank You for Riding the T started out as a musique concrète piece for Evan Ziporyn's course in computer music at MIT in 2002. I had recorded water sounds and edited them to align with a recording I made of a subway train's arrival and departure from the Kendall/MIT station, as well as sounds from the musical sculpture at that station, Kendall Band.

After the course I further developed this piece by mixing in convolutions from samples of (1) the source water sounds, (2) the Kendall Band sculpture, and (3) Screwdrivabilitation. The result is featured here, evoking the wetness and industrial terrain of Boston, particularly during those days of the Big Dig.
Blasphemous Bosphorus is a piece I programmed in LISP using CM and CLM libraries for Chris Chafe's course in computer music at Stanford University in 2004. This piece explores Middle Eastern rhythms and tones but pushes beyond what is possible with physical instruments.

I used samples of a darbuka, a drum used in traditional Turkish music, to play rhythms from the eight traditional Arabic modes outlined by Safi-ad-Din in the 13th century. I also used a string software model to synthesize a saz-like electronic instrument. A saz (a.k.a. bağlama) is a traditional Turkish stringed instrument with 16 frets to an octave. The non-western tuning of the instrument led me to use the Persian Rast scale. More details and LISP code can be found here.

All tracks copyright © 1999- Regaip Sen

To read and hear recordings of my extracurricular involvement in music, please visit my main music page.