PROFESSIONAL
Below is a selection of products I have contributed to as an employee. They are listed in reverse chronological order, most recent first.
Cruise Origin by Cruise is an upcoming driverless vehicle with no steering wheel. I'm an engineering manager in the AI department, and my teams have covered sensor simulation and authoring tools for simulated driving scenarios. Click on the image to read more about sensor simulation.
Display.land by Ubiquity6 was a photogrammetry app that turned objects and environments into 3D meshes, which could then be used in shared AR and VR experiences. I started out as a tech lead for their proprietary game engine and API, and then became an engineering manager for the studio team. (The company and app are no more, but you can check out Polycam for the successor app that was spun off by our computer vision team.)
Sansar by Linden Lab: After taking over as their audio engineer, I added live audio streaming, in-editor preview, audio functionality for our script API, and obstruction/occlusion of sounds by walls and static objects. I also recruited internal talent to create music and sound effects for our app. As the tech lead across our content creation teams, I found and resolved pain points in our development processes and eliminated friction from our editor workflows.
Mafia 3 by Hangar 13 (2K Games): As their audio engineer, my first responsibility was to integrate Wwise from scratch into Hangar 13's proprietary game engine. I stayed onboard to support audio tech needs for Mafia 3 and related downloadable content, which we shipped on PC, Xbox One, and PS4.
Star Wars: 1313 and Star Wars: First Assault by LucasArts: I continued my role as audio engineer for these games, and we continued to use Wwise as the audio engine. I also took over database development for the dialogue system. (Unfortunately, our studio closed before either game was released.)
Star Wars: The Force Unleashed 2 by LucasArts: I continued with my audio and build roles for this game and related content. While we had used our own in-house audio engine on the first game, for the sequel we integrated Wwise (Audiokinetic's commercial sound engine). Like the first game, we shipped this on PC, Xbox 360, and PS3.
Star Wars: The Force Unleashed by LucasArts: I was part of the audio development team behind this game, as well as the sole build engineer. Not only does it comprise the next chapter in the Star Wars story (taking place between Episodes III and IV), but it also showcases a lot of new technology for the first time. Instead of canned animation, characters and materials uniquely react to stimuli using NaturalMotion's Euphoria and Pixelux's DMM (Digital Molecular Matter) technologies, respectively. Furthermore, this was our first game built on Zeno/Ronin, the rendering engine and authoring suite used by Industrial Light and Magic.
System-5 MC by Euphonix (now Avid): I was part of the software development team behind the first products featuring the EUCON protocol, in particular the MC Controller and System-5 MC. What made them stand out from other mixing controllers was their ability to integrate with Pro Tools, Logic Pro, Nuendo, Pyramix, Digital Performer, Final Cut Studio, and more. In contrast, a mouse offers far less control at a time, a surface made for a single product wouldn't work with other programs, and HUI provided less control than EUCON and couldn't integrate with Nuendo or Pyramix.
Below is a list of projects I've conducted on my own or in a small team.
Live demo | GitHub
Gapless 5 is a JavaScript audio player I wrote to solve the problem that HTML5 Audio doesn't support seamless transitions, while WebAudio can't play a track until it has fully loaded. To work around this, Gapless 5 uses both: if the WebAudio buffer hasn't finished loading yet, playback starts with HTML5 Audio and then crossfades to WebAudio once it's ready. Several others have contributed to this project on GitHub over the years.
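The crossfade itself happens in JavaScript between an HTML5 Audio element and a WebAudio source; purely as a rough illustration of the equal-power crossfade idea (not Gapless 5's actual code), here is a Python sketch with synthetic tones:

```python
# Equal-power crossfade between two mono buffers: the same handoff idea,
# illustrated with synthetic tones. Buffer contents and fade length are made up;
# Gapless 5 itself does this between an HTML5 Audio element and a WebAudio source.
import numpy as np

def equal_power_crossfade(a, b, fade_samples):
    """Fade out `a` while fading in `b`, keeping perceived loudness roughly constant."""
    t = np.linspace(0.0, 1.0, fade_samples)
    fade_out = np.cos(t * np.pi / 2)          # 1 -> 0
    fade_in = np.sin(t * np.pi / 2)           # 0 -> 1
    overlap = a[-fade_samples:] * fade_out + b[:fade_samples] * fade_in
    return np.concatenate([a[:-fade_samples], overlap, b[fade_samples:]])

if __name__ == "__main__":
    sr = 44100
    t = np.arange(sr) / sr
    tone_a = np.sin(2 * np.pi * 440 * t)      # one second of A4
    tone_b = np.sin(2 * np.pi * 660 * t)      # one second of E5
    mixed = equal_power_crossfade(tone_a, tone_b, fade_samples=sr // 10)
    print(mixed.shape)                        # roughly 1.9 seconds of samples
```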
Daily Fuzzy is a mobile app that collects adorable animal pictures and videos using Reddit's API. I created this for my partner, who thought it should be put on the App Store. It's now available for both iOS and Android devices. Update: I've made the code open-source. Here is the source for iOS and the source for Android.
get_cover_art is a Python package and command line tool that downloads and embeds cover art for your audio files. It requires no manual intervention and is more reliable than Apple Music's "Get Album Artwork" feature (though it still uses their catalog of standardized high-quality artwork). Clicking the album cover takes you to the PyPI project page, but you can also visit the GitHub page.
Mobile Fusion Tables was a mobile-friendly web template that turned your Fusion Tables data into a navigable and searchable web app. I worked on this with a small team at Code for America's local (SF) brigade. This project took existing work for searchable Fusion Table maps and made it mobile-friendly and easily customizable. Google has since discontinued Fusion Tables, but you can still visit our old GitHub page from the icon on the left.
gallery_get is a Python package and command line tool that crawls and downloads images from gallery sites. I got tired of galleries that redirect their image links, so I wrote this to crawl the redirects for me. It has plugins for different sites. Clicking the icon takes you to the PyPI project page, but you can also visit the GitHub page.
Doblet (now Dash Plus) was a network of portable chargers for your smartphone: you could borrow them at libraries, bars, co-working spaces, and other venues in the San Francisco Bay Area. I was the first iOS developer for this Y Combinator-backed company, and later coordinated engineering dependencies between the Android/iOS clients and the Rails backend. Click the icon to see a demo video of Doblet's integration with Uber. Update: Doblet was acquired and rebranded as Dash Plus.
DemandVille was a Rails application to help small businesses decide what products and features to build, based on customer feedback and pre-orders. Gregor Hanuschak and I worked together on this project, where he drove the business side and I was responsible for the tech. In the end we didn't find product-market fit, but I gained experience co-founding a business, pitching to investors, and recruiting talent. Click the icon to see a demo of how our site worked.
Below is a list of designs and websites I've created.
The Alchemist's Guide to Alcoholic Beverages maps alcoholic beverages onto the four classical elements of fire, earth, water, and air, grouped and color-coded by food pairings and other categories. Update: Taschen just published my graphic as part of their new book, Food & Drink Infographics! Order the book here.
Cookbook For Nerds: I envisioned and developed a diagram-based cookbook with the help of my sister. It's currently available on Kindle, Scribd, and Google Play, and I've begun to reach out to publishers in the hopes of getting it into retail stores. Update: Taschen just published our banana bread recipe as part of their new book, Food & Drink Infographics! Order the book here.
The Dark Side of the Bay: The BART (subway) map of the SF Bay Area already looks like the diffracted light beam from a certain album cover... so here is my take on it. It's available as a T-shirt or poster from two stores: Society 6 and Threadless.
THE402: An interactive randomized "album", created as a collaboration between me and two German industrial musicians. I implemented the site, using my Gapless 5 player featured above. The code for the site itself is open source and hosted here.
Zen Finger Painting: A band website that takes you to a listening page for the latest album. On certain browsers it uses my Gapless 5 player featured above. Note: I intentionally obfuscated the site's JavaScript to protect against webcrawlers downloading our album.
This site (how meta is that): It queries public Google Drive spreadsheets to populate the record collection catalog on my personal page, but everything else is static. I've periodically maintained it since around 2001, updating its conventions over the years to keep up with changing browsers and requirements.
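As a sketch of the kind of query involved, here is how a public Google Sheet can be pulled as CSV in Python; the spreadsheet ID and column names are placeholders, not my actual catalog:

```python
# Pull a public Google Sheet as CSV and iterate its rows.
# SHEET_ID is a placeholder, not the real catalog spreadsheet.
import csv
import io
import urllib.request

SHEET_ID = "YOUR_PUBLIC_SHEET_ID"
URL = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv"

with urllib.request.urlopen(URL) as response:
    text = response.read().decode("utf-8")

for row in csv.reader(io.StringIO(text)):
    print(row)  # e.g. artist, album, format, year (columns are hypothetical)
```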
Below is a selection of audio research projects I have conducted or been involved in.
Lathe Cut Vinyl Records: Audio Comparisons and Reviews is my first informational video for the general public. I spent a year researching and trying out different lathe cut services to have my own music pressed to vinyl records (watch them being played here). Not everyone has the time and money to do the research and try out multiple services, so I made this video to help others make an informed decision.
Gamelan Sequencer: I found that western scores and MIDI files aren't well-suited for composing and playing pieces for a gamelan ensemble, so I decided to write my own format. It's inspired by the kepatihan cipher system and features a corresponding sequencer script in Python. Provided with instrument samples (which you can override), the script turns a score into a recording. I was fortunate enough to find instrument samples of the UC Davis Gamelan Ensemble, recorded for ketuk-ketik.com by Elisa Hough, and with permission I'm using those samples to seed this system. Click on the kantilan to the left to visit the repository, or scroll below to hear a piece I transcribed with it.
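The real score format and sample handling live in the repository; the sketch below is a simplified stand-in that renders a cipher-style melody with synthesized tones instead of gamelan samples, and its five scale frequencies are illustrative rather than a measured tuning:

```python
# Simplified stand-in for a kepatihan-style cipher score: digits 1-5 pick a scale
# degree and "." is a rest. The frequencies are made up (not a measured tuning),
# and sine bursts stand in for the recorded gamelan samples the real script uses.
import wave
import numpy as np

SAMPLE_RATE = 44100
SCALE_HZ = {1: 293.0, 2: 330.0, 3: 370.0, 4: 415.0, 5: 466.0}  # illustrative 5-note scale

def render(score, beat_seconds=0.5):
    n = int(SAMPLE_RATE * beat_seconds)
    t = np.arange(n) / SAMPLE_RATE
    envelope = np.exp(-4.0 * t)                    # quick decay, vaguely metallophone-like
    beats = []
    for symbol in score.split():
        if symbol == ".":
            beats.append(np.zeros(n))
        else:
            beats.append(np.sin(2 * np.pi * SCALE_HZ[int(symbol)] * t) * envelope)
    return np.concatenate(beats)

def write_wav(path, audio):
    pcm = (np.clip(audio, -1, 1) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(pcm.tobytes())

if __name__ == "__main__":
    write_wav("demo.wav", render("1 2 3 . 3 2 1 . 5 3 2 1"))
```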
HRTF Calibrator: While in Stanford's CCRMA program I invented a calibration system to customize head-related transfer functions (HRTFs) to individuals without having their ears measured, modeled, or fitted with a microphone. It's based on the same concept as an eye exam: the subject listens to pairs of stimuli that are spatialized using slightly different HRTFs and decides which one of each pair sounds better spatialized. When calibration is complete, the program saves a custom configuration based on the results. The subject can then use my program to hear 3D demos or spatialize their own audio samples. Click on the headphones to the left for more information and downloads, and click here for the related patent application.
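The actual program plays real spatialized stimuli and writes out a full HRTF configuration; the sketch below only illustrates the eye-exam-style selection loop, with made-up candidate parameters and a placeholder playback stub:

```python
# Sketch of the "eye exam" loop: the listener repeatedly compares two candidate
# HRTF settings and keeps whichever sounds better spatialized. The candidates,
# their parameters, and play_spatialized() are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class HrtfCandidate:
    name: str
    pinna_gain: float        # illustrative parameter, not from the real system
    ear_spacing_cm: float    # illustrative parameter, not from the real system

def play_spatialized(candidate):
    # Placeholder: the real program renders a test stimulus through this candidate.
    print(f"(playing a stimulus spatialized with candidate {candidate.name})")

def calibrate(candidates):
    best = candidates[0]
    for challenger in candidates[1:]:
        play_spatialized(best)
        play_spatialized(challenger)
        if input("Which sounded better spatialized, 1 or 2? ").strip() == "2":
            best = challenger
    return best

if __name__ == "__main__":
    pool = [HrtfCandidate("A", 0.8, 14.0),
            HrtfCandidate("B", 1.0, 15.5),
            HrtfCandidate("C", 1.2, 16.0)]
    winner = calibrate(pool)
    print(f"Saving a configuration based on candidate {winner.name}")
```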
Audio Codec: I wrote an audio codec in C with two other people for a course in Stanford's CCRMA program. The encoder and decoder are runnable from a Linux terminal, and the user can specify the bitrate, block size, and alpha (the mask addition coefficient), and choose from the following masking functions: two-slope, Schroeder, Model 1, Model 2, or Terhardt. In addition, I formulated our own criteria to differentiate between tone maskers and noise maskers. Click on the cowbell to the left to read the documentation, and click on the Schubert song samples below to hear the sound quality of the encoded audio.
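Our actual tone/noise criteria are our own (see the documentation); purely as an illustration, the sketch below applies one common heuristic, where a spectral peak standing several dB above its neighborhood counts as a tone masker, with an arbitrary 7 dB margin:

```python
# One common heuristic for separating tone maskers from noise maskers: a bin that
# is a local spectral peak and stands at least `margin_db` above its neighborhood
# counts as a tone masker; everything else is treated as noise. The 7 dB margin
# and 2-bin neighborhood are illustrative, not the criteria from our codec.
import numpy as np

def classify_maskers(block, margin_db=7.0):
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    power_db = 20 * np.log10(spectrum + 1e-12)
    tone_bins, noise_bins = [], []
    for k in range(2, len(power_db) - 2):
        local_max = power_db[k] >= power_db[k - 1] and power_db[k] >= power_db[k + 1]
        stands_out = (power_db[k] - power_db[k - 2] >= margin_db and
                      power_db[k] - power_db[k + 2] >= margin_db)
        (tone_bins if local_max and stands_out else noise_bins).append(k)
    return tone_bins, noise_bins

if __name__ == "__main__":
    sr, n = 44100, 1024
    t = np.arange(n) / sr
    block = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.random.randn(n)
    tones, noise = classify_maskers(block)
    print(f"{len(tones)} tone-masker bins, {len(noise)} noise-masker bins")
```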
Phase Perception Model: During the first two quarters at Stanford I worked with Professor Malcolm Slaney and classmate Hiroko Terasawa on comparing experimental results of phase perception with Prof. Slaney's MATLAB perception model. Click the ear on the left to see my documentation and LISP code. Also, here is the slide show from our March 24, 2004 presentation at the pre-CoSyNe workshop on Auditory Processing of Vocalizations and other Complex Sounds at the Cold Spring Harbor Laboratory, New York.
Foosball Live! In a team with Wai Kit Leung and Ariege Misherghi for a human-computer interface course in Stanford's CCRMA program, I modified a foosball table to create chance music and realistic crowd reaction based on game play, accompanied by announcer sound bites for scored goals and victory music cued at the end of each game. We labeled each pole with black-and-white stripes and affixed an optical sensor to detect pole motion, and we attached piezo disc (microphone) sensors to each goal casing to detect scored goals. The sensors are wired to an AVRmini chip that runs a small C program I coded to find the rate of motion of each pole and send the values via OSC to a patch in Pure Data (an open-source counterpart of Max/MSP). I wrote the patch to select instruments based on the score of each team and sample them at a rate proportional to the motion of the corresponding pole. We also used a third-party patch for the crowd reaction (written by Paul Leonard, featured here).
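The rate-finding and message-sending ran as a small C program on the AVRmini; as a rough Python stand-in for the same OSC messaging (it needs the python-osc package, and the host, port, and address scheme are made up), the idea looks like this:

```python
# Rough stand-in for the AVR firmware's job: report each pole's motion rate to a
# Pure Data patch over OSC. Requires the python-osc package; the host, port, and
# /pole/<n>/rate address scheme are made up, not the original patch's inlets.
import random
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # wherever the Pd patch is listening

while True:
    for pole in range(1, 9):                  # eight poles on a standard table
        rate = random.random()                # stand-in for the optical sensor reading
        client.send_message(f"/pole/{pole}/rate", rate)
    time.sleep(0.05)
```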
Pee-wee's Pencil Sharpener is a rotary pencil sharpener that I wired to act as a talking Jack-in-the-box for a human-computer interface course in Stanford's CCRMA program. It "sings" the melody to "Pop Goes the Weasel" at the rate at which the crank is turned. When the crank causes the pencil brace to retract, the verse melody jumps to the refrain with a Pee-wee Herman style laugh. I attached a bend sensor to the pencil brace to indicate when the handle is retracted, and inserted a mechanical rotary encoder inside the crank to detect its rotation. A short C program in the AVRmini calculates crank speed and sets a flag when the bend sensor crosses a threshold. These values are sent via OSC to a Pure Data patch that samples the verse based on the crank rotation and interrupts with the refrain when the flag is triggered. Click on the thumbnail for a video clip.
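The real logic is a short C program on the AVRmini; the Python sketch below just mirrors its two calculations, with made-up sensor readings, encoder resolution, and threshold:

```python
# Python mirror of the firmware's two calculations: crank speed from encoder
# ticks, and a flag when the bend sensor crosses a threshold. The threshold,
# encoder resolution, and sample readings are all made up.
BEND_THRESHOLD = 512          # illustrative ADC value for "brace retracted"
TICKS_PER_REVOLUTION = 24     # illustrative encoder resolution

def crank_speed(tick_count, elapsed_seconds):
    """Crank revolutions per second over the last polling interval."""
    return (tick_count / TICKS_PER_REVOLUTION) / elapsed_seconds

def poll(tick_count, bend_reading, elapsed_seconds=0.1):
    speed = crank_speed(tick_count, elapsed_seconds)
    refrain_flag = bend_reading > BEND_THRESHOLD
    return speed, refrain_flag  # the real program sends these via OSC to Pure Data

if __name__ == "__main__":
    print(poll(tick_count=6, bend_reading=300))    # slow cranking, verse continues
    print(poll(tick_count=12, bend_reading=700))   # brace retracted, jump to refrain
```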
Jegogan Synth: Inspired by the unique tuning system of Balinese gamelan ensembles, I coded a patch in Pure Data that plays Jegogan pairs based on MIDI input for a course in Stanford's CCRMA program. The Jegogan is played with a single mallet, while the free hand manually mutes the otherwise undamped bars. In order to make the keyboard 'feel' like a Jegogan, I had the black keys sound each note (Jegogans have 5 notes to an octave) while the white keys mute the notes. Note that the Jegogan sample can be replaced by another instrument to achieve the same effect. Users control several parameters of the tuning system; click on the left screenshot for more information, or click here to obtain the patch.
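The instrument logic lives in the Pure Data patch; the sketch below redoes only the keyboard mapping in Python with the mido package (it needs a connected MIDI input, and the degree mapping is a placeholder):

```python
# Keyboard mapping only, redone in Python with mido: black keys strike one of the
# five Jegogan notes, white keys mute whatever is ringing. The default input port
# and the degree mapping are placeholders; the real version is a Pure Data patch.
import mido

BLACK_PITCH_CLASSES = [1, 3, 6, 8, 10]     # C#, D#, F#, G#, A#

def is_black_key(note):
    return note % 12 in BLACK_PITCH_CLASSES

def scale_degree(note):
    """Map a black key to one of the five notes per octave."""
    return BLACK_PITCH_CLASSES.index(note % 12) + 5 * (note // 12)

with mido.open_input() as port:            # default MIDI input port
    for msg in port:
        if msg.type != "note_on" or msg.velocity == 0:
            continue
        if is_black_key(msg.note):
            print(f"strike degree {scale_degree(msg.note)}")   # trigger the sample
        else:
            print("mute the ringing bars")                     # like the free hand
```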
Mozart's Eine Kleine Nachtmusik: Behind the Editions was my senior thesis for MIT's music program. This musicology paper investigates discrepancies between the editions of this piece, particularly regarding the treatment of staccato markings. It features tables that compare markings between the autograph and later editions.
Below is a list of audio projects I have produced.
Home Studio Recordings
- See music page for selected tracks.

MIT Media Lab
- Unbuilt Ruins exhibit (later I incorporated some of the audio into ZFP's Human Spaceflight for European Citizens, Part 2)
- Joshua Bell - Paganini's Caprice No. 24 (live, featuring Hyperviolin techniques)

HUSEAC (Harvard University)
- Hyperstring Trilogy on YouTube
- JSWM Quartet - Someday My Prince Will Come
- See music page for selected tracks.
- "Elephant Dreams" is on the Stanford Soundtrack, Vol. 4 compilation album.

Home Studio: Post-Production Projects
- Click here for details and MP3's.
- Visit my DJ Rego playlist to hear all my mixes.
- Visit my YouTube channel to hear my restorations.
Below is a sample of electronic and traditional compositions I have made.
Play MP3 | Invention in D minor is a piece I composed for Peter Child's "Writing in Tonal Forms" course at MIT. It follows Bach's style of inventions for the harpsichord, and the recording is from a performance by Mark Kroll.
Javanese version | TS-10 version | Kotekan Sonatina is a piece I composed for Evan Ziporyn's "Music of Indonesia" course at MIT. The form of the piece is based on sonata form, while the counterpoint is based on Balinese Kotekan (which consists of interlocking patterns between parts). A "score" of the composition can be found here.
Play MP3 | Screwdrivabilitation is a piece I created for Tod Machover's "Projects in Media" course at the MIT Media Lab, combining distortion from a 4-track tape recorder with synthesized AM and FM waves via Max/MSP. It's about our persistent fascination with analog in spite of the takeover of digital technology. At the start of the piece, the analog sources show off their ability to fluidly change their timbres but have trouble lining up rhythmically with other sources. The digital sources flaunt their precision but lack the fluid effects of analog. As the piece progresses, the digital sources gain analog control, while the analog sources align in phase. It seems as though both sources will mutually benefit, but soon the digital sources fade out from the exhaustion of trying too hard to sound analog. Analog wins and follows through to end the piece.
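The piece itself was realized with the 4-track and Max/MSP; the snippet below is only a numpy illustration of the two synthesis types it refers to, with made-up carrier and modulator settings:

```python
# AM and FM waves of the kind the piece uses, generated with numpy.
# Carrier/modulator frequencies and the modulation depths are made up.
import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr                 # two seconds

carrier_hz, modulator_hz = 220.0, 3.0
am = (1 + 0.8 * np.sin(2 * np.pi * modulator_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

mod_index = 5.0                            # peak phase deviation, in radians
fm = np.sin(2 * np.pi * carrier_hz * t + mod_index * np.sin(2 * np.pi * modulator_hz * t))

print(am.shape, fm.shape)
```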
Play MP3 | Thank You for Riding the T started out as a musique concrète piece for Evan Ziporyn's course in computer music at MIT. I had recorded water sounds and edited them to align with a recording I made of a train's arrival and departure from the Kendall/MIT subway station (also featuring sounds from the musical sculpture at that station, Kendall Band). Later on I revisited that piece, mixing in convolutions with samples from (1) the source water sounds, (2) the station recording, and (3) Screwdrivabilitation. The result is featured here. This piece begins with a fight for control between the water samples and the convolved tracks. The fight escalates until the train arrives, which silences everyone in awe (and in mono). The other tracks, enlightened about their common ancestry, call out to the departing train in gratitude. This all happens in 1 minute and 59 seconds.
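The convolutions themselves were done as part of producing the revision; purely as an illustration of the operation, here is a scipy sketch that convolves one stand-in signal with another before mixing:

```python
# Convolving one recording with another, the operation mixed into the revision.
# Synthetic signals stand in for the water and station recordings.
import numpy as np
from scipy.signal import fftconvolve

sr = 44100
water = np.random.randn(sr)                                # stand-in water sample
station = np.sin(2 * np.pi * 180 * np.arange(sr) / sr)     # stand-in station tone

wet = fftconvolve(station, water, mode="full")
wet /= np.max(np.abs(wet))                                 # normalize before mixing
print(wet.shape)
```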
Play MP3 | Blasphemous Bosphorous is a piece I programmed in LISP (using the CM and CLM libraries) for Chris Chafe's course in computer music at Stanford University. I used samples of a darbuka, a drum used in traditional Turkish music. I also used a string software model to synthesize a saz-like electronic instrument. A saz (a.k.a. bağlama) is a traditional Turkish stringed instrument with 16 frets to an octave. The non-western tuning of the instrument led me to use the Rast scale for the notes played on the electronic instrument. More details and LISP code can be found here.
All tracks copyright © 1999- Regaip Sen
To read and hear recordings of my extracurricular involvement in music, please visit my main music page.