This page gathers a series of tutorials around the Faust programming language that I wrote as part of various projects. It was not written in a "linear" way, so each section is independent and covers a different topic. As a result, you may well find some redundancy if you read this page from top to bottom. Should you have any questions, feel free to send me an e-mail.

What is Faust?

Faust (Functional Audio Stream) is a functional programming language specifically designed for real-time signal processing and synthesis. Faust targets high-performance signal processing applications and audio plug-ins for a variety of platforms and standards.

The core component of Faust is its compiler, which "translates" any Faust digital signal processing (DSP) specification into a wide range of general-purpose languages such as C++, C, Java, JavaScript, LLVM bitcode, etc. Thanks to a wrapping system called "architectures," code generated by Faust can easily be compiled into a wide variety of objects, from audio plug-ins to standalone applications, smartphone apps, web apps, etc. (check the Faust README for an exhaustive list).

The goal of this page is to provide a series of tutorials on various topics around Faust, ranging from "getting started/Faust 101" to more advanced topics such as using Faust to generate APIs. The best companion for these tutorials is the Faust Quick Reference (which can also be found in the Faust distribution itself). We strongly recommend briefly perusing it before reading any further on this page.

What is Faust Good For?

Faust's syntax allows you to express any DSP algorithm as a block diagram. For example, + is a valid function (and block) taking two arguments (signals) and returning one:

process = +;

A graphical block diagram representation of this expression can be generated using the faust2svg command line tool:

Blocks can be easily connected together using the : "connection" operator:

process = + : *(0.5);

In that case, we add two signals together and then scale the result of this operation.

Thus, Faust is perfect to implement time-domain algorithms that can be easily represented as block diagrams such as filters, waveguide physical models, virtual analog elements, etc.

Faust is very concise. For example, here's the implementation of a one-pole filter/integrator equivalent to y(n) = x(n) + y(n-1)*p (where p is the pole):

process = +~*(p);
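To hear this integrator in action, here is a minimal sketch; the pole value 0.9 and the noise scaling are arbitrary choices for illustration, not from the original:

```faust
import("stdfaust.lib");
p = 0.9; // pole, chosen arbitrarily for this sketch
process = no.noise*0.1 : +~*(p); // y(n) = x(n) + y(n-1)*p
```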

Code generated by Faust is extremely optimized and usually more efficient than handwritten code (at least for C and C++). The Faust compiler tries to optimize each element of an algorithm. For example, you shouldn't have to worry about using divisions instead of multiplications: the compiler automatically replaces divisions with multiplications when possible.

Faust is very generic and lets you write code that will run on dozens of platforms.

What is Faust Not (So) Good For?

Even though Faust is awesome (yes ;) ), it has some limitations. For instance, it doesn't allow you to implement algorithms requiring multiple rates, such as the FFT, convolution, etc. While there are tricks to work around this issue, we're fully aware that it is a big one and we're working as hard as possible on it.

Faust's conciseness can sometimes become a problem too, especially for complex algorithms with lots of recursive signals. It is usually crucial in Faust to have the "mental global picture" of the algorithm to be implemented which in some cases can be hard.

While the Faust compiler is relatively bug-free, it does have some limitations and might get stuck in some extreme cases that you will probably never encounter. If you do, shoot us an e-mail!

Other Useful Resources

There are dozens of useful resources around Faust out there; this is a non-exhaustive list of some of them:


Faust Hero in 2 Hours!

The goal of this tutorial is to teach you how to use the basic elements of the Faust programming language in approximately two hours! While any "time domain" DSP (Digital Signal Processing) algorithm can be easily written from scratch in Faust, we'll just show you here how to use existing elements implemented in the Faust libraries, connect them to each other, and implement basic user interfaces (UI) to control them.

One of the strengths of Faust lies in its libraries, which implement hundreds of functions. Thus, you should be able to go a long way after this tutorial simply by using what's already out there.

This tutorial was written assuming that the reader is already familiar with basic concepts of computer music and algorithms.

More generally, at the end of this tutorial:

If you wish to get a "deeper" introduction to Faust, we recommend reading the Faust quick reference or taking our Faust online course. In any case, it might be good to keep the quick reference handy somewhere on your computer as you read this tutorial.

Setting-Up Your Development Environment

As mentioned earlier, the core of Faust-based technologies is the Faust compiler. While we won't really need it for this tutorial, we recommend installing it on your system anyway, as it might prove useful later.

There are actually several versions of the Faust compiler: Faust0 (which will become Faust 1 one day) and Faust2. An overview of the differences between these two versions is available in the Faust README. We recommend using Faust0 for the tutorials presented on this page, as it is simpler to install than Faust2, but feel free to install Faust2 if you'd like. To get the latest features of Faust, install it from the source code by getting a snapshot of our repository. After this, unzip the file and follow the instructions in the README. If you're a Windows user, be aware that Faust is a bit harder to install on this platform than on Linux and OSX, so you might want to skip this step for now (otherwise it will take you more than two hours to become a Faust hero).

One of the best tools to quickly develop Faust code on your computer is FaustLive. FaustLive can be downloaded here and is available on Linux, OSX and Windows! Make sure you read the README before you install it on your system; this might save you a lot of time later :).

While you might choose to use FaustLive for this tutorial, if you're really in a hurry, you can do it with the Faust playground, which runs directly in your web browser without your having to install anything. The Faust playground should work on all platforms (even on iOS and Android!), at least in Google Chrome, Firefox and Safari. You might notice that some features of the Faust playground are unavailable depending on the browser you use. For example, since Google Chrome only allows audio input over https, you won't be able to get audio input from http://faust.grame.fr with that browser. Similarly, audio latency and computational efficiency might be an issue with some browsers.

The main advantage of using the Faust playground over FaustLive is that you can have access to many of the features of Faust without having to install anything on your system!

You're now ready to write your first Faust code so let's do it!

Making Sound

The Faust playground comes with a series of Faust code examples that can be found in the LIBRARY tab. Click on it, and in the Instruments column, double-click on Kisana. A new object should be placed in the workspace. Connect it to the speaker on the left and you should now hear some sound! Try moving the sliders to see what they do. You can open as many Faust objects as you want in the workspace and connect them together. For example, we could add an echo to this harp by choosing LIBRARY/Effects/Echo and connecting it as such:

Now close KISANA and click on the pen icon in the bottom left corner of ECHO. What you see here is the Faust code associated with the ECHO object. Select it all and delete it! We'll now use this window to write fresh Faust code from scratch!

If you wish to use FaustLive for this tutorial, you should be able to do what's described in the following steps quite easily. Just launch FaustLive and open a new default window. Create a new text file on your system with the .dsp extension and drag-and-drop it in the FaustLive window after writing the Faust code presented below. If you drag-and-drop an empty file, FaustLive will complain. Once a text file is "bound" to FaustLive, every change you make in it will be reflected when the file is saved.

Write the following code in the box (or in the text file bound to FaustLive):

import("stdfaust.lib");
process = no.noise;

and then click on the enter icon at the bottom left corner of the box. If you're using FaustLive, changes should be reflected whenever you save your .dsp file. You should now hear white noise.

stdfaust.lib gives you access to all the Faust libraries from a single point through a bunch of environments. For instance, we're using here the no environment which stands for noise.lib and the noise function which is the standard white noise generator of Faust. The Faust libraries documentation provides more details about this system.

The most fundamental element of any Faust code is the process line which gives you access to the audio inputs and outputs of your system. This system is completely functional and dynamic and since no.noise only has one output and no input, the ECHO box (which is not an echo anymore) has a single output.

Let's statically scale the output of no.noise simply by multiplying it by a number between 0 and 1:

process = no.noise*0.5;

Thus, standard mathematical operations can be used in Faust just like in any other language.

We'll now connect the noise generator to a resonant lowpass filter by using the Faust connection operator: :

import("stdfaust.lib");
ctFreq = 500;
q = 5;
gain = 1;
process = no.noise : fi.resonlp(ctFreq,q,gain);

fi.resonlp has four arguments (in order): cut-off frequency, q, gain and its input. Here, we're setting the first three arguments with variables. Variables don't have a type in Faust and everything is considered a signal. The Faust compiler takes care of making the right optimizations by choosing which variables are run at audio rate, what their types are, etc. Thus, ctFreq, q and gain could well be controlled by oscillators here.

Since the input of the filter is not specified as an argument here (but it could, of course), it automatically becomes an "implicit" input/argument of fi.resonlp. The : connection operator can be used to connect two elements that have the same number of outputs and inputs. Since no.noise has one output and fi.resonlp(ctFreq,q,gain) has one implicit input, we can connect them together. This is essentially the same as writing something like this:

process = fi.resonlp(ctFreq,q,gain,no.noise);

This would work but it's kind of ugly and not very "Faustian", so we don't do it... ;)

At this point, you should be able to use and plug many different elements of the Faust libraries together. The Faust libraries implement hundreds of functions, and many of them are not that useful to most people. Fortunately, the Faust libraries documentation contains a section on Standard Faust Libraries listing all the high-level "standard" Faust functions organized by type. We recommend having a look at it now. As you do, be aware that implicit signals in Faust can be represented with the _ character. Thus, when you see something like this in the library doc:

_ : aFunction(a,b) : _

it means that this function has one implicit input, one implicit output and two parameters (a and b). On the other hand:

anotherFunction(a,b,c) : _,_

anotherFunction is a function that has three parameters, no implicit input and two outputs.
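As a concrete illustration (our own example, not taken from the library doc), fi.lowpass from filters.lib has the signature _ : fi.lowpass(n,fc) : _, i.e., one implicit input and output and two parameters, so it can be dropped into a chain directly:

```faust
import("stdfaust.lib");
// fi.lowpass(n,fc): nth-order Butterworth lowpass with cutoff fc (in Hz)
process = no.noise : fi.lowpass(3,2000);
```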

Just for "fun", try to rewrite the previous example running in the Faust playground so that the process line looks like this:

process = no.noise : _ : fi.resonlp(ctFreq,q,gain) : _;

Of course, this should not affect the result.

You probably noticed that the , operator is used in Faust to express signals in parallel. We can easily turn our filtered noise example into a stereo object using it:

process = no.noise : _ <: fi.resonlp(ctFreq,q,gain),fi.resonlp(ctFreq,q,gain);

or we could even write this in a cleaner way:

filter = fi.resonlp(ctFreq,q,gain);
process = no.noise <: filter,filter;

Since filter,filter is considered here as a full expression, we cannot use the : operator to connect no.noise to the two filters in parallel: filter,filter has two inputs (_,_ : filter,filter : _,_) while no.noise has only one output.

The <: "split" operator used here takes n signals and splits them into m signals. The only rule is that m has to be a multiple of n.

The merge :> operator can be used exactly the same way:

process = no.noise <: filter,filter :> _;

Here we split the signal of no.noise into two signals that are connected to two filters in parallel. Finally, we merge the outputs of the filters into one signal. Note that the previous expression could also have been written as:

process = no.noise <: filter+filter;

Keep in mind that splitting a signal doesn't mean that its energy gets spread across the copies. For example, in the expression:

1 <: _,_

the two _ both contain 1...
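Conversely, the merge operator sums the signals it combines. A quick sketch to verify this:

```faust
// :> sums the merged signals: this outputs a constant signal of 2
process = 1,1 :> _;
```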

Faust allows you to display a browsable block diagram of your code, which is very helpful for debugging. Unfortunately, this feature is not available in the Faust playground, but it can be accessed in FaustLive by choosing Window/View SVG Diagram. The diagram of the code presented at the beginning of the next section should look like:

All right, it's now time to add a basic user interface to our Faust code and make things controllable...

Building a Simple User Interface

In this section, we'll add a simple user interface to the code that we wrote in the previous section:

import("stdfaust.lib");
ctFreq = 500;
q = 5;
gain = 1;
process = no.noise : fi.resonlp(ctFreq,q,gain) ;

Faust allows you to declare basic user interface (UI) elements to control the different parameters of a Faust object. Since Faust can be used to make a wide range of elements, from standalone applications to audio plug-ins or APIs, the meaning of "UI" differs a little depending on the target you decide to use. For example, in the Faust playground or in FaustLive, a UI is a window with various kinds of controllers (sliders, buttons, etc.). On the other hand, if you're using Faust to generate an API using faust2api, then UI elements declared in your Faust code will be the parameters visible to "the rest of the world" and controllable through the API.

An exhaustive list of the standard Faust UI elements is given in the Faust quick reference. Be aware that not all of them are supported by all the Faust targets. For example, you won't be able to declare vertical sliders if you're using the Faust playground, etc.

In the current case, we'd like to control the ctFreq, q and gain variables of the previous code with horizontal sliders. To do this, we can write something like:

import("stdfaust.lib");
ctFreq = hslider("cutoffFrequency",500,50,10000,0.01);
q = hslider("q",5,1,30,0.1);
gain = hslider("gain",1,0,1,0.01);
process = no.noise : fi.resonlp(ctFreq,q,gain);

The first argument of hslider is the name of the parameter as it will be displayed in the interface or used in the API (it can be different from the name of the variable associated with the UI element), the next one is the default value, then the min and max values and finally the step. To summarize: hslider("paramName",default,min,max,step).

Let's now add a "gate" button to start and stop the sound (where gate is just the name of the button):

import("stdfaust.lib");
ctFreq = hslider("[0]cutoffFrequency",500,50,10000,0.01);
q = hslider("[1]q",5,1,30,0.1);
gain = hslider("[2]gain",1,0,1,0.01);
t = button("[3]gate");
process = no.noise : fi.resonlp(ctFreq,q,gain)*t;

Notice that we were able to order the parameters in the interface by numbering them in the parameter name field using square brackets.

Faust user interface elements run at a slower rate than the audio rate. Thus, you might have noticed that clicks are generated when sliders are moved quickly. This problem can easily be solved by "smoothing" the output of the sliders using the si.smoo function:

import("stdfaust.lib");
ctFreq = hslider("[0]cutoffFrequency",500,50,10000,0.01) : si.smoo;
q = hslider("[1]q",5,1,30,0.1) : si.smoo;
gain = hslider("[2]gain",1,0,1,0.01) : si.smoo;
t = button("[3]gate") : si.smoo;
process = no.noise : fi.resonlp(ctFreq,q,gain)*t;

Note that we're also using si.smoo on the output of the gate button to apply an exponential envelope to its signal.
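If you need a more configurable envelope than si.smoo, the Faust libraries also provide envelope generators. Here is a hedged sketch assuming en.asr(attackTime,sustainLevel,releaseTime,trigger) from envelopes.lib:

```faust
import("stdfaust.lib");
t = button("gate");
// en.asr: 10 ms attack, sustain at full level, 500 ms release
env = en.asr(0.01,1,0.5,t);
process = no.noise*env;
```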

This is a very broad introduction to making user interface elements in Faust. You can do much more, like creating groups, using knobs, different types of menus, etc., but at this point you should at least be able to make Faust objects that are controllable and sound good.
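For instance, parameters can be grouped together. Assuming the standard behavior where UI declarations sharing the same group label end up in the same group, the filter controls could be laid out as:

```faust
import("stdfaust.lib");
// both sliders are declared in the same "filter" hgroup
ctFreq = hgroup("filter",hslider("[0]cutoffFrequency",500,50,10000,0.01)) : si.smoo;
q = hgroup("filter",hslider("[1]q",5,1,30,0.1)) : si.smoo;
process = no.noise : fi.resonlp(ctFreq,q,1);
```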

Final Polishing

Some Faust functions already contain a built-in UI and are ready-to-be-used. These functions are all placed in demo.lib and are accessible through the dm. environment.

As an example, let's add a reverb to our previous code by calling dm.zita_rev1 (a high-quality feedback-delay-network-based reverb). Since this function has two implicit inputs, we also need to split the output of the filter (otherwise you will get an error because Faust won't know how to connect things):

import("stdfaust.lib");
ctFreq = hslider("[0]cutoffFrequency",500,50,10000,0.01) : si.smoo;
q = hslider("[1]q",5,1,30,0.1) : si.smoo;
gain = hslider("[2]gain",1,0,1,0.01) : si.smoo;
t = button("[3]gate") : si.smoo;
process = no.noise : fi.resonlp(ctFreq,q,gain)*t <: dm.zita_rev1;

Hopefully, you should see many more UI elements in your interface.

That's it folks! At this point you should be able to use Faust standard functions, connect them together and build a simple UI on top of them.

Some Project Ideas

In this section, we present a couple of project ideas that you could try to implement using Faust standard functions. Also, feel free to check the /examples folder of the Faust repository.

Additive Synthesizer

Make an additive synthesizer using os.osc (sine wave oscillator):

import("stdfaust.lib");
// freqs and gains definitions go here
process = 
    os.osc(freq0)*gain0,
    os.osc(freq2)*gain2 
    :> _ // merging signals here
    <: dm.zita_rev1; // and then splitting them for stereo in
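One possible (purely illustrative) set of definitions for the missing freqs and gains, using a slider for the fundamental and deriving the other partial from it:

```faust
import("stdfaust.lib");
freq0 = hslider("freq0",440,50,2000,0.01);
gain0 = hslider("gain0",0.5,0,1,0.01);
freq2 = freq0*2; // one octave above the fundamental (arbitrary choice)
gain2 = gain0*0.5;
process =
    os.osc(freq0)*gain0,
    os.osc(freq2)*gain2
    :> _ // merging signals here
    <: dm.zita_rev1; // and then splitting them for stereo in
```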

FM Synthesizer

Make a frequency modulation (FM) synthesizer using os.osc (sine wave oscillator):

import("stdfaust.lib");
// carrierFreq, modulatorFreq and index definitions go here
process = 
    os.osc(carrierFreq+os.osc(modulatorFreq)*index)
    <: dm.zita_rev1; // splitting signals for stereo in
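A possible set of definitions for the carrier, modulator and index (values chosen arbitrarily):

```faust
import("stdfaust.lib");
carrierFreq = hslider("carrierFreq",440,50,2000,0.01);
modulatorFreq = hslider("modulatorFreq",110,10,1000,0.01);
index = hslider("index",100,0,1000,0.1); // modulation depth in Hz
process =
    os.osc(carrierFreq+os.osc(modulatorFreq)*index)
    <: dm.zita_rev1; // splitting signals for stereo in
```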

Guitar Effect Chain

Make a guitar effect chain:

import("stdfaust.lib");
process = 
    dm.cubicnl_demo : // distortion 
    dm.wah4_demo <: // wah pedal
    dm.phaser2_demo : // stereo phaser 
    dm.compressor_demo : // stereo compressor
    dm.zita_rev1; // stereo reverb

Since we're only using functions from demo.lib here, there's no need to define any UI, since it is built into the functions that we're calling. Note that the mono output of dm.wah4_demo is split to fit the stereo input of dm.phaser2_demo. The last three effects have the same number of inputs and outputs (2x2), so there's no need to split or merge them.

String Physical Model Based On a Comb Filter

Make a string physical model based on a feedback comb filter:

import("stdfaust.lib");
// freq, res and gate definitions go here
string(frequency,resonance,trigger) = trigger : ba.impulsify : fi.fb_fcomb(1024,del,1,resonance)
with{
    del = ma.SR/frequency;
};
process = string(freq,res,gate);

The sampling rate is defined in math.lib as SR (accessed here through the ma environment as ma.SR). We're using it to compute the length of the delay of the comb filter. with{} is a Faust primitive that attaches local variables to a function; in the current case, del is a local variable of string.
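The freq, res and gate definitions alluded to in the comment could look like this (the slider ranges are our choice):

```faust
import("stdfaust.lib");
freq = hslider("freq",440,50,1000,0.01);
res = hslider("res",0.99,0.9,1,0.001); // close to 1 = long decay
gate = button("gate");
string(frequency,resonance,trigger) = trigger : ba.impulsify : fi.fb_fcomb(1024,del,1,resonance)
with{
    del = ma.SR/frequency;
};
process = string(freq,res,gate);
```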

What to Do From Here?


Making Physical Models of Musical Instruments With Faust

This tutorial demonstrates how to use the Faust Physical Modeling ToolKit (FPMTK) to implement physical models of musical instruments in Faust. The goal of this tutorial is not to teach physical modeling, but rather to get coders started with FPMTK; it should be within the reach of anyone with some Faust background.

The main goal of the FPMTK is to make the design and prototyping of physical models an easy task. Models presented in this tutorial are assembled in a very high-level way and are often not an exact reproduction of their "real world" counterparts. On the contrary, we favor the creation of novel instruments here rather than completely accurate models of existing acoustic instruments. Also, the physical modeling library part of the FPMTK is meant to grow, so if you can't find what you're looking for (and you probably often will), let us know and we'll do our best to implement it. Similarly, if you implement elements that could be added to the library, send them our way and we'll add them for you...

If you have no idea what Faust is and just ended up here, you should at least read the section on what Faust is first. If you'd like to know more about physical modeling of musical instruments in general, we recommend checking Julius Smith's online book on Physical Audio Signal Processing. We'll try to link the elements studied in this tutorial to this resource as much as possible.

Getting Started With the Faust Physical Modeling Library

The Faust physical modeling library (physmodels.lib) is part of the Faust distribution and should come with the latest version of Faust (you won't have it unless your version of Faust was built after April 28, 2017). Its functions are documented in the standard Faust libraries documentation. It is organized in several sections (see this section) ranging from simple conversion tools to ready-to-use models with built-in UIs. All the models implemented in this library use either waveguide modeling or modal synthesis (i.e., no mass/spring models, etc.). It is completely modular, and objects can easily be combined and connected to each other.

In this section, we demonstrate how to use the Faust physical modeling library to create models of existing or completely new instruments. We assume that you have some background in acoustics, digital signal processing, and basic notions of physical modeling.

Bidirectional Block-Diagram Semantics

In order to be modular, instrument parts implemented in physmodels.lib (e.g., tubes, mouthpieces, etc.) must be bidirectional. Take an acoustic guitar, for example: the signal produced by the strings is transmitted to the body through the bridge, but the opposite is also true: signals from the body are transmitted to the strings through the bridge, etc. Sound waves don't go in just one direction...

This is even more true for wind instruments where coupling between the different parts of the instrument is very strong. On a clarinet, pressure waves generated by the performer and the reed travel across the tube and are reflected at its end and sent back to the mouthpiece, etc.

The block-diagram oriented syntax of Faust makes it easy to create chains of blocks going from left to right using the : operator and from right to left using ~. However, since ~ creates a feedback signal, it can't really be used to create bidirectional blocks directly.

"High level elements" of the Faust physical modeling library are all based on a series of functions allowing algorithms to be implemented using a bidirectional block-diagram approach. Each "block" has three inputs and three outputs and can be used within a chain (note that any function in physmodels.lib can be called with the pm prefix if stdfaust.lib is imported). The first input and output correspond to left-going waves, the second input and output carry right-going waves, and the third input and output can be used to send a signal to the end of the algorithm. For example, the pickup of an electric guitar is usually placed somewhere toward the middle of a string, so in that case it is helpful to retrieve a signal somewhere within a chain. We'll give more advanced examples of this later.

An "empty" block can be as simple as:

simpleBlock = _,_,_;

A simple "gain scaling block" could look like:

gainBlock(g) = *(g),*(g),_;

In that case, g scales both the left-going and the right-going waves.

To connect these two blocks together, the chain function must be used:

foo = chain(simpleBlock : gainBlock(g) : simpleBlock);

Note the use of : within chain, which is used here as a "bidirectional" connector. Unfortunately, faust2svg is not yet capable of interpreting this as a bidirectional chain, so the corresponding svg diagram will look more complicated than it should... Instead, here's a "hand-made" graphical representation of the previous expression:

You can see that chain essentially reverses the orientation of the second input and output of each block without "ending" the chain. Thus foo here becomes a new "bidirectional" block that can in turn be used within a chain, etc.

The main limitation of this system is that each block within a chain induces a one-sample delay in both directions (the output "channel" is not affected). This is due to the implicit one-sample delay created by the ~ operator applied to left-going waves for each block in the chain. To keep blocks balanced, a one-sample delay is also added to right-going waves.

Let's now look at the implementation of a "more useful" block of physmodels.lib:

waveguideFd4(nMax,n) = par(i,2,de.fdelay4(nMax,n)),_;
waveguide(nMax,n) = waveguideFd4(nMax,n);

So, a waveguide is just two delay lines in parallel and an "empty" channel for a potential output signal. By default, waveguide uses 4th order Lagrange interpolation fractional delays, but this can be changed.

Now if we keep going one level up, we can look at the implementation of a "string segment" (stringSegment):

stringSegment(maxLength,length) = waveguide(nMax,n)
with{
    nMax = maxLength : l2s;
    n = length : l2s/2;
};

which is essentially just a waveguide whose length can be controlled as a size in meters instead of samples. Note that the implementation of delays in Faust forces us to specify a maximum string size here.

Now, any model needs to somehow resonate and for that, we need to connect right-going waves to left-going waves or vice versa. This can be easily done using the lTermination and the rTermination functions:
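Their signatures can be sketched as follows (a hedged reconstruction inferred from their use in physmodels.lib; the two "End" names are hypothetical):

```faust
// lTermination(a,b): terminates the left side of bidirectional block b
// with the one-in/one-out reflexion function a
// rTermination(b,a): same thing on the right side of b
leftEnd  = lTermination(*(-1),stringSegment(3,1));  // hypothetical usage
rightEnd = rTermination(stringSegment(3,1),*(-1));  // hypothetical usage
```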

In both cases, b is a bidirectional block (it could be a chain, of course) and a is a function with one input and one output.

"Ideal" rigid (lossless) string terminations can then be easily implemented using these two functions:

rStringRigidTermination = rTermination(basicBlock,*(-1));
lStringRigidTermination = lTermination(*(-1),basicBlock);

where *(-1) implements a lossless reflexion with a phase inversion. rStringRigidTermination and lStringRigidTermination are both part of physmodels.lib.

Signals can be "injected" anywhere within a chain by using the in function:

foo = chain(stringSegment : in(x) : stringSegment);

where x is a signal to insert between the two stringSegments. Typically, the length of one string segment could be changed in function of the other to control the position of excitation on the virtual string.

out works the same way as in and bridges the signal of left-going and right-going waves at a specific location in a chain to the output channel:

foo = chain(stringSegment : in(x) : stringSegment : out : stringSegment);

In that case, out can be viewed as a pickup on an electric guitar string whose location can be controlled independently from the excitation position.

Finally, endChain can be used to "terminate" a chain simply by blocking the 3 inputs of a block and its first 2 outputs (the third output probably contains the output signal):

foo = endChain(chain(A : B)) : _;

Simple Virtual String Example

We now have enough elements to implement an "ideal string" with lossless terminations (in other words, it will vibrate forever):

idealString(length,pluckPosition,excitation) = endChain(wg)
with{
    maxStringLength = 3;
    lengthTuning = 0.08; // adjusted by hand
    tunedLength = length-lengthTuning;
    nUp = tunedLength*pluckPosition; // upper string segment length
    nDown = tunedLength*(1-pluckPosition); // lower string segment length
    wg = chain(lStringRigidTermination : stringSegment(maxStringLength,nUp) :
    in(excitation) : out : stringSegment(maxStringLength,nDown) :
    rStringRigidTermination); // waveguide chain
};

Since each element in a chain adds a one sample delay in two directions, the tuning of this string is wrong and needs to be adjusted "by hand" (see lengthTuning, which is specified in meters here). This is a well-known issue in waveguide physical modeling that is not specific to the use of our chain function. Indeed, adding a filter to a waveguide loop will be enough to detune the string or the tube that it is implementing since all filters add delays to the signal they process.

idealString here allows you to control the position of the pickup and the plucking position by modulating the lengths of the two string segments as a function of each other.

Obviously, this string model will not sound very good since energy within the string is never dissipated. Typically, the simple phase inversion reflexion implemented in lStringRigidTermination and rStringRigidTermination would be replaced by some kind of lowpass filter, etc.
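As a sketch of that idea (assuming si.smooth from signals.lib as the lowpass; the coefficient and the "damped" names are our own, not from the library):

```faust
import("stdfaust.lib");
// lossy reflexion: phase inversion followed by a one-pole lowpass that
// dissipates a little high-frequency energy at each reflexion
dampedReflexion = *(-1) : si.smooth(0.1);
lStringDampedTermination = pm.lTermination(dampedReflexion,pm.basicBlock);
rStringDampedTermination = pm.rTermination(pm.basicBlock,dampedReflexion);
```

Substituting these terminations for the rigid ones in idealString would give the string a finite decay.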

The following section gives an overview of the higher level elements implemented in physmodels.lib and demonstrates how to use them.

Overview of the Models Implemented in physmodels.lib

physmodels.lib contains a wide range of higher-level elements compared to the ones that we studied so far (e.g., clarinet mouthpiece, violin bridge, etc.) as well as complete "ready-to-use" models of instruments. This section gives an overview of the global organization of the library and how to use its various models.

Functions With UI and MIDI Support

The "highest level functions" of physmodels.lib all end with _ui_MIDI and implement a graphical user interface compatible with the Faust MIDI standards (they can be controlled with a MIDI keyboard). All these functions are called by one of the examples available in the Faust distribution in /examples/physicalModeling, which should more or less have the same name as the function they call. For example, guitar_ui_MIDI is a MIDI-controllable acoustic guitar model that can just be called on its own:

import("stdfaust.lib");
process = pm.guitar_ui_MIDI;

One level below functions ending with _ui_MIDI are functions ending just with _ui. These functions also have their own built-in UI but are not controllable with MIDI note events. Indeed, in many cases it might be preferable to control some physical models exclusively with continuous parameters. For example, the clarinet model implemented in physmodels.lib has both a clarinet_ui_MIDI and a clarinet_ui function associated with it. In clarinet_ui, the pressure, the mouth position on the mouthpiece, the length of the tube, etc., can be controlled as continuous parameters, while in clarinet_ui_MIDI, these three parameters are formatted and controlled by an envelope generator, itself controlled by "MIDI parameters" (e.g., frequency, gain, and note on/off).

MIDI functions can also be turned into polyphonic instruments using the Faust polyphony system (e.g., using the -nvoices option during compilation). While this makes sense for specific classes of instruments such as the clarinet (i.e., one key = one instrument), it might break the "physical coherence" of other types of instruments. Take a violin, for example: it is a polyphonic instrument because of its four strings. A simple violin model could consist of a one-string violin made polyphonic using the system described above (violin_ui_MIDI implements such a model). Even if this works, we'll lose some of the features of the instrument by doing this, such as sympathetic resonances between the strings or the gestural limitations induced by the use of a single bow to drive multiple strings. Thus, an accurate violin model implementing four strings is much more complex to control with MIDI than a one-string "polyphonic" one. Furthermore, just like in "the real world," realistic sounds will only be achieved by adjusting the different parameters of the model as a function of each other during the performance (e.g., the velocity of the bow, the pressure of the bow on a string, etc.). The Faust Physical Modeling ToolKit doesn't implement such a "sophisticated" control system, and it is the programmer's responsibility to provide accurate parameter values to the model. All that to say that for advanced models such as the violin described above, there's no point writing a "Faust MIDI function," since only continuous control of all the parameters of the model will provide satisfying results. The control of physical models is a pretty wide topic that won't be treated any further in this tutorial.

Some lower level elements that don't implement complete models also have a built-in UI and can be combined together. For example, clarinet_ui is made of a "blower" function connected to a clarinet model:

clarinet_ui = hgroup("clarinet",blower_ui : clarinetModel_ui);

Organization of the Library

physmodels.lib implements a wide range of instrument parts and complete models organized in various categories.

At this point, we just recommend you to explore the content of physmodels.lib and to look at the implementation of its functions to understand how it works and to get inspired.

Investigating a Few Models

Karplus-Strong

Aaaah, the good old Karplus-Strong (KS) algorithm can be found in physmodels.lib as ks, and its implementation looks like this:

ks(length,damping,excitation) = endChain(ksChain)
with{
    maxStringLength = maxLength;
    lengthTuning = 0.05;
    tunedLength = length-lengthTuning;
    refCoef = (1-damping)*0.2+0.8;
    refFilter = ksReflexionFilter*refCoef;
    ksChain = terminations(_,chain(in(excitation) :
        stringSegment(maxStringLength,tunedLength) : out),refFilter);
};

Since this simple string physical model is one-dimensional, only one reflection filter is used to absorb energy at both terminations. ksReflexionFilter is a "typical" one-zero KS filter. Finally, you can see that the "pickup" is placed near the bridge here...

Related UI and MIDI functions: ks_ui_MIDI.
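The raw ks function can also be driven by a custom excitation signal (a sketch; the slider ranges and the use of pm.f2l to convert a frequency to a string length are assumptions):

```faust
import("stdfaust.lib");
length = hslider("freq", 440, 50, 1000, 0.01) : pm.f2l; // frequency (Hz) to string length
damping = hslider("damping", 0.01, 0, 1, 0.01);
pluck = button("pluck") : ba.impulsify; // one-sample impulse on each press
process = pm.ks(length, damping, pluck) <: _,_;
```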

Electric Guitar

elecGuitarModel implements a simple electric guitar model without excitation generator and audio effects:

elecGuitarModel(length,pluckPosition,excitation) = endChain(egChain)
with{
    maxStringLength = maxLength;
    lengthTuning = 0.11; // tuned "by hand"
    stringL = length-lengthTuning;
    egChain = chain(elecGuitarNuts :
        openStringPick(stringL,0.05,pluckPosition,excitation) : elecGuitarBridge);
};

The model has only one string and is easy to turn into a MIDI-controllable instrument. openStringPick is configured here as a steel string and allows the pickup position to be chosen independently from the excitation position. elecGuitarNuts and elecGuitarBridge are just reflection filters. The pitch of the string is changed by modifying its length (no finger model).
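For example, the model could be excited with a short noise burst (a sketch; the envelope times, slider ranges, and the use of pm.f2l to convert a frequency to a string length are assumptions):

```faust
import("stdfaust.lib");
freq = hslider("freq", 330, 80, 1000, 0.01);
pluckPosition = hslider("pluck position", 0.8, 0, 1, 0.01);
excitation = no.noise*en.ar(0.001, 0.01, button("pluck")); // short noise burst
process = pm.elecGuitarModel(pm.f2l(freq), pluckPosition, excitation) <: _,_;
```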

Related UI and MIDI functions: elecGuitar_ui_MIDI.

More on electric guitar modeling.

Acoustic Guitar

guitarModel implements an acoustic guitar model with a single steel string (the nylon strings version is nylonGuitarModel):

guitarModel(length,pluckPosition,excitation) = endChain(egChain)
with{
    maxStringLength = maxLength;
    lengthTuning = 0.1; // tuned "by hand"
    stringL = length-lengthTuning;
    egChain = chain(guitarNuts : steelString(stringL,pluckPosition,excitation) :
        guitarBridge : guitarBody : out);
};

guitarBridge implements a reflectance and a transmittance model to propagate energy to the body, which is implemented here as a simple filter (guitarBody).

Related UI and MIDI functions: guitar_ui_MIDI.

More on acoustic guitar modeling.

Violin Model

violinModel implements a single-string bowed string instrument physical model that can be used as a violin, a cello, a viola, etc.:

violinModel(stringLength,bowPressure,bowVelocity,bowPosition) = endChain(modelChain)
with{
    stringTuning = 0.08;
    stringL = stringLength-stringTuning;
    modelChain = chain(
        violinNuts :
        violinBowedString(stringL,bowPressure,bowVelocity,bowPosition) :
        violinBridge :
        violinBody :
        out
    );
};

violinBowedString implements the bow as well as its interaction with the string. Just like for the simple guitar models presented previously, pitch is changed by modifying the length of the string (no finger model).
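Following the earlier remark about continuous control, violinModel can be driven directly with sliders (a sketch; the slider ranges and the use of pm.f2l to convert a frequency to a string length are assumptions):

```faust
import("stdfaust.lib");
freq = hslider("freq", 440, 50, 1000, 0.01);
bowPressure = hslider("bow pressure", 0.5, 0, 1, 0.01) : si.smoo;
bowVelocity = hslider("bow velocity", 0.5, 0, 1, 0.01) : si.smoo;
bowPosition = hslider("bow position", 0.7, 0, 1, 0.01) : si.smoo;
process = pm.violinModel(pm.f2l(freq), bowPressure, bowVelocity, bowPosition) <: _,_;
```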

Related UI and MIDI functions: violin_ui and violin_ui_MIDI.

More on bowed string instruments modeling.

Clarinet Model

clarinetModel is a simple clarinet model based on a tube with a mouthpiece and a "bell" connected at its two ends:

clarinetModel(tubeLength,pressure,reedStiffness,bellOpening) = endChain(modelChain)
with{
    lengthTuning = 0.05; // empirical adjustment of the tuning of the tube
    maxTubeLength = maxLength;
    tunedLength = tubeLength/2-lengthTuning; // not really sure why we had to shift octave here
    modelChain =
        chain(
            clarinetMouthPiece(reedStiffness,pressure) :
            openTube(maxTubeLength,tunedLength) :
            wBell(bellOpening) : out
        );
};

Just like for all the other models presented above, pitch is changed by modifying the length of the tube (no tone holes model). clarinetMouthPiece takes care of implementing the mouthpiece as well as its interaction with the rest of the instrument.

Related UI and MIDI functions: clarinet_ui and clarinet_ui_MIDI.

More on woodwind instruments modeling.

Brass and Flute

The brass (brassModel) and flute (fluteModel) models work more or less the same way as the clarinet model:

brassModel(tubeLength,lipsTension,mute,pressure) = endChain(brassChain)
with{
  maxTubeLength = maxLength;
  lengthTuning = 0; // Not that important for that one because of lips tension
  tunedLength = tubeLength + lengthTuning;
  brassChain = chain(brassLips(tunedLength,lipsTension,pressure) : openTube(maxTubeLength,tunedLength) : wBell(mute) : out);
};

fluteModel(tubeLength,mouthPosition,pressure) = endChain(fluteChain) : fi.dcblocker
with{
  maxTubeLength = maxLength;
  tubeTuning = 0.27; // set "by hand"
  tLength = tubeLength+tubeTuning; // global tube length
  embouchurePos = 0.27 + (mouthPosition-0.5)*0.4; // position of the embouchure on the tube
  tted = tLength*embouchurePos; // head to embouchure distance
  eted = tLength*(1-embouchurePos); // embouchure to foot distance
  fluteChain = chain(fluteHead : openTube(maxTubeLength,tted) : fluteEmbouchure(pressure) : openTube(maxTubeLength,eted) : fluteFoot : out);
};

Related UI and MIDI functions: brass_ui, brass_ui_MIDI, flute_ui, and flute_ui_MIDI.

More on brass modeling and flute modeling.

Making Custom Elements Using mesh2faust

Making "standard" custom physical modeling elements compatible with physmodels.lib is easy. However, modeling more complex structures such as guitar bodies is more complicated, since they can't be accurately implemented with simple filters or basic waveguide systems. In this section, we demonstrate how to turn the 3D graphical representation of an instrument part into a physical model compatible with physmodels.lib using mesh2faust. mesh2faust generates modal physical models and can only be used to model linear objects (e.g., modeling a gong won't be possible with this system).

In this section, we'll make a marimba physical model by modeling a marimba tone bar using open source CAD software and mesh2faust, as well as elements of physmodels.lib such as tubes, etc. We'll also briefly talk about other models that can be fully implemented using mesh2faust alone, such as bells.

All the tools presented in this section are open source and should be freely available.

3D Models of Instrument Parts

The first thing we need to do is draw the 3D part that we'll turn into a physical model. While any CAD software can be used for this (e.g., SolidWorks, Rhino, AutoCAD, etc.), the solution presented here is completely Open Source and should work on any platform.

OpenSCAD is a CAD program where 3D shapes are created through a high-level functional programming language. If you haven't done so already, download OpenSCAD and install it on your system. We won't explain how to use OpenSCAD in this tutorial, but there are many online resources that you can use to learn more about this topic.

While OpenSCAD is great for drawing simple shapes, making more complex parts (such as a violin body :) ) can become quite complicated. The Faust distribution now comes with a new tool called inkscape2scad (which can be found in /tools/physicalModeling/inkscape2scad) that allows 2D drawings made in Inkscape to be exported to OpenSCAD.

If you don't already have Inkscape on your system, get it now and install inkscape2scad by following the instructions in /tools/physicalModeling/inkscape2scad/README.md. This should be very straightforward.

We now want to make a 3D CAD model of a marimba tone bar using Inkscape and OpenSCAD. Keep in mind that everything we do with physical modeling will always be just an approximation of reality, so we'll take some liberties to reach our goal in the next few steps of this tutorial. Thus, we'll just draw a marimba tone bar cross-section in 2D using a Bezier curve in Inkscape and extrude it in OpenSCAD.

The following path was drawn by importing the picture of a C2 marimba tone bar in Inkscape and drawing a Bezier curve at the top of it:

The picture above is an SVG file that you can download to your computer and use for the following steps of this tutorial.

After drawing the part you want to extrude, you must make sure that its dimensions are correct (i.e., select the path that you just drew and adjust its size). It should be 440mm x 22.18mm here.

After this, go to Extensions/Generate from Path/Paths to OpenSCAD. In the window that pops up, adjust the module name (e.g., marimba) and the output path. Choose "Linear" as the type of extrusion and set the "Linear Extrude Height" to 50mm, which corresponds to the depth of our marimba tone bar. Choose "0" for "Smoothing" to generate a CAD model with the highest resolution, and finally click "Apply".

Open the generated file in OpenSCAD and render it, you should now have a "beautiful" C2 marimba tone bar 3D geometry:

Make sure that it looks right and export it as an .stl file.

If this didn't work, you can get all the files of this tutorial in libraries/modalmodels/marimbaBar in the Faust distribution. Other modal models can be found in the modalmodels folder.

A similar approach can be taken for other types of objects. For example, bell cross-sections can be drawn in Inkscape and then exported to OpenSCAD using the "Rotate Extrude" option of inkscape2scad. Several bell models were made using this technique and are presented on this page. They are also available in physmodels.lib as functions (just do a search for "bells") and in libraries/modalmodels in the Faust distribution.

Once again, any CAD software can be used for this step. While the open source solution we presented here will be sufficient for many cases, it has some limitations, and modeling more complex objects such as a violin body might be more challenging.

Finally, any .stl file available on the web can be turned into a physical model, and finding pre-made 3D models of various instrument parts should be straightforward.

Meshing

In order to convert the tone bar made in the previous section into a physical model, it must be turned into a volumetric mesh that will be compatible with the finite element analysis carried out by mesh2faust.

In practice, .stl files already represent the object they contain as a mesh (a succession of triangular faces). The problem with this type of mesh is that it is highly optimized: flat surfaces are represented by as few triangles as possible. Thus, if we plot the vertices of the tone bar mesh contained in the .stl file generated in the previous section, it should look like this:

As you can see, most of the triangles are concentrated in the curvy part of the model, which requires a higher resolution, while the "top plate" is represented by only 2 triangles. In order for the finite element analysis to provide good results, this mesh needs to be reorganized to make sure that triangles are somewhat uniformly distributed over the model. All the complexity of meshing for FEM lies in creating a mesh that is as uniform as possible without altering the shape of the object too much.

We'll use MeshLab for the next steps of this tutorial. MeshLab is Open Source and you should install it on your system now.

Open the .stl file generated with OpenSCAD in the previous section and switch to wireframe mode to see the vertices of the mesh. The easiest way to remesh our model with equally distributed triangles is to use the "Uniform Mesh Resampling" function of MeshLab, which can be found in "Filters/Remeshing, Simplification and Reconstruction". The "Precision" box allows you to specify the target size of the triangles in the new mesh: the smaller their size, the more triangles, and the higher the resolution of the new mesh. The density of the mesh has a significant impact on the duration of the FEM analysis, which increases exponentially with it. To give you an idea, analyzing a mesh with ~3E4 faces will probably take about 15 minutes on a regular laptop. Here, we'll choose a size of 3.6 "world unit" for the "Precision" parameter of the resampling and then run the filter. The new mesh should have approximately 9150 faces and is created as a new layer in MeshLab. You can get rid of the old mesh by clicking on "View/Show Layer Dialog" and deleting it there. The new mesh should look like a bridge designed by Gustave Eiffel:

Selecting a smaller value for the "precision" parameter would yield a higher density mesh, but the current one is good enough for what we're trying to achieve here.

Other remeshing techniques might be used to reorganize meshes generated by OpenSCAD. For example, very high-definition meshes might be "down-sampled" by performing a "Quadric Edge Collapse Decimation" (also available in "Filters/Remeshing, Simplification and Reconstruction"). This technique was used to make the bell meshes presented here: the strategy in that case was to export very high-density meshes (>3E5 faces) from OpenSCAD and down-sample them to 3E4 faces.

A Laplacian smooth (available in "Filters/Smoothing, Fairing and Deformation") might also help to further uniformize the mesh.

There is no "secret recipe" for meshing and it's kind of up to you to find the best compromise between density and quality of the model...

Before you export your mesh, you must scale it so that its internal dimensions are in meters (by default, OpenSCAD uses millimeters, but mesh2faust expects a mesh in meters). This can easily be done in MeshLab by going to "Filters/Normals, Curvatures and Orientation/Transform: Scale". From there, make sure that "Uniform Scaling" is selected, enter 0.001 in any of the axis boxes, and apply the filter.

Finally, export your mesh as a .obj file using the default parameters. You're now ready to turn your 3D object into a Faust modal physical model!

Converting a Mesh to a Faust Physical Model

Before going any further, you should install mesh2faust (which is part of the Faust distribution in tools/physicalModeling/mesh2faust) by following these instructions. This might get a bit tricky in some cases, so if you encounter any issue, feel free to contact me.

As explained at the previous link, mesh2faust carries out a finite element analysis of the mesh provided to it. The result of this analysis is converted into modal data (a list of frequencies and gains) that is embedded in a modal physical model. More details about this are available in this paper.

A good configuration for mesh2faust in our case could look like this (the role of each flag used here is detailed in the mesh2faust documentation):

mesh2faust --infile marimbaBar.obj --nsynthmodes 50 --nfemmodes 200
--maxmode 15000 --expos 2831 3208 3624 3975 4403 --freqcontrol
--material 1.3E9 0.33 720 --name marimbaBarModel

marimbaBar.obj contains the mesh to be analyzed. We generate a model with a maximum of 50 modes and make sure that the highest mode doesn't exceed 15kHz. We configure the properties of the material to be the same as those of rosewood (which is typically used for marimba tone bars). We call the generated model "marimbaBarModel" and require the fundamental frequency of the generated model to be controllable (--freqcontrol). Finally, a series of vertex IDs is provided (--expos) to set the various excitation positions on the model. The ID of a vertex can easily be retrieved in MeshLab by selecting "Get Info" and clicking on one of the triangles of the mesh. This displays the IDs of the selected triangle:

The vertex ID is the number following "vn:". Limiting the number of excitation positions is crucial, since by default there are as many positions as there are vertices in the mesh. Since the gain of each mode is different for each position, a Faust model containing all positions would have numberOfModes*numberOfExcitationPositions gain values. It's also a good way to organize excitation positions in a coherent way, understandable by the user (vertex IDs are kind of randomly distributed across the mesh).

After running mesh2faust, the generated Faust model (that can also be found in physmodels.lib) should look like this:

marimbaBarModel(freq,exPos,t60,t60DecayRatio,t60DecaySlope) = _ <: par(i,nModes,modeFilter(modesFreqs(i),modesT60s(i),modesGains(int(exPos),i))) :> /(nModes)
with{
nModes = 50;
nExPos = 5;
modesFreqRatios(n) = ba.take(n+1,(1,3.31356,3.83469,8.06313,9.44778,14.1169,18.384,21.0102,26.1775,28.9944,37.0728,37.8703,40.0634,47.6439,51.019,52.43,58.286,63.5486,65.3628,66.9587,74.5301,78.692,80.8375,89.978,92.9661,95.1914,97.4807,110.62,112.069,113.826,119.356,127.045,129.982,132.259,133.477,144.549,149.438,152.033,153.166,155.597,158.183,168.105,171.863,174.464,178.937,181.482,185.398,190.369,192.19,195.505));
modesFreqs(i) = freq*modesFreqRatios(i);
modesGains(p,n) = waveform{1,0.776725,0.625723,0.855223,0.760159,0.698373,0.768011,0.641127,0.244034,0.707754,0.634013,0.247527,0.660849,0.450396,0.567783,0.106361,0.716814,0.66392,0.291208,0.310599,0.801495,0.635292,0.307435,0.874124,0.497668,0.487088,0.459115,0.733455,0.541818,0.441318,0.31392,0.40309,0.685353,0.60314,0.400552,0.453511,0.634386,0.291547,0.131605,0.368507,0.839907,0.60216,0.288296,0.57967,0.0242493,0.262746,0.368588,0.890284,0.408963,0.556072,0.884427,0.83211,0.612015,0.757176,0.919477,1,0.827963,0.89241,0.0357408,0.480789,0.752872,0.0546301,0.235937,0.362938,0.444472,0.101751,0.703418,0.453136,0.316629,0.490394,0.982508,0.551622,0.602009,0.666957,0.77683,0.905662,0.0987197,0.402968,0.829452,0.307645,0.64048,0.983971,0.584205,0.650365,0.334447,0.58357,0.540191,0.672534,0.245712,0.687298,0.883058,0.79295,0.600619,0.572682,0.122612,0.388248,0.290658,0.380255,0.290967,0.567819,0.0737721,0.42099,0.0786578,0.393995,0.268983,0.260614,0.494086,0.238026,0.0987824,0.277879,0.440563,0.0770212,0.450591,0.128137,0.0368275,0.128699,0.329605,0.374512,0.36359,0.272594,0.379052,0.305241,0.0741129,0.345728,0.29935,0.221284,0.0261391,0.293202,0.361885,0.11433,0.239005,0.434156,0.329583,0.21946,0.284175,0.198555,0.431976,0.302985,1,0.146221,0.140701,0.264243,0.185997,0.426322,0.30478,0.34399,0.19543,0.386955,0.1876,0.172812,0.0434115,0.303761,0.069454,0.453943,0.832451,0.317817,0.940601,1,0.180658,0.737921,0.832297,0.402352,0.126786,0.594398,0.485455,0.32447,0.365102,0.777922,0.588272,0.401353,0.610735,0.158693,0.0746072,0.825099,0.925459,0.65377,0.260792,0.719384,0.559908,0.37259,0.360035,0.622939,0.210271,0.444595,0.311286,0.464309,0.557231,0.52408,0.0701056,0.320749,0.19446,0.727609,0.522062,0.394004,0.235035,0.395646,0.494796,0.517318,0.109752,0.692849,0.00632009,0.0207583,0.00306107,0.0637191,0.081661,0.03511,0.127814,0.202294,0.0764145,0.263127,0.400199,0.267278,0.633385,1,0.739902,0.413763,0.41811,0.612715,0.672374,0.339674,0.21172,0.459645,0.1025,0.32589,0.148154,0.265442,0.0974305,0.286438,0.275213,0.109111,0.575089,0.370283,0.29411,0.259826,0.0648719,0.583418,0.282663,0.182004,0.117421,0.417727,0.16965,0.24853,0.122819,0.185486,0.0433618,0.373849,0.252768,0.195103,0.0927835,0.166543},int(p*nModes+n) : rdtable : select2(modesFreqs(n)<(ma.SR/2-1),0);
modesT60s(i) = t60*pow(1-(modesFreqRatios(i)/195.955)*t60DecayRatio,t60DecaySlope);
};

As explained in the related ICMC paper, modes are implemented using a bank of resonant bandpass filters, allowing the model to be driven by any signal. The modesGains table contains the mode gains for the 5 selected excitation positions, so the range of exPos is 0-4. The way this function works is detailed in the mesh2faust documentation.

Since mode t60s are not computed by mesh2faust, we need to set t60, t60DecayRatio, and t60DecaySlope by hand here. We found that good values for a C2 marimba tone bar are:

t60 = 0.1; // seconds
t60DecayRatio = 1;
t60DecaySlope = 5;

Our tone bar model can easily be tested by writing a simple Faust code such as:

import("stdfaust.lib");
import("marimbaBarModel.lib");

toneBar = marimbaBarModel(freq,exPos,t60,t60DecayRatio,t60DecaySlope)
with{
  freq = 65.4;
  exPos = 0;
  t60 = 0.1;
  t60DecayRatio = 1;
  t60DecaySlope = 5;
};

excitation = button("gate") : impulseExcitation;

process = excitation : toneBar <: _,_;

We chose a frequency of 65.4Hz, which corresponds to C2. Since mode frequencies are expressed as ratios to the fundamental in our model, we could "theoretically" use it for any tone bar of our marimba model. Unfortunately, things are a bit more complicated than that :)... Indeed, while mode frequencies translate very well, mode gains don't. Additionally, the shape of marimba tone bars changes as a function of their size. For the sake of time, we won't model any other tone bar in this tutorial, but we'll see that our model only produces a realistic sound if it is played no more than an octave above C2. A "middle ground" could be to have a different tone bar model for each octave, but we won't do this here.

This tone bar should now be connected to a tube to complete the model. This can easily be done using physmodels.lib. A simple marimba resonant tube can be implemented as follows (marimbaResTube is available in physmodels.lib):

marimbaResTube(tubeLength,excitation) = endChain(tubeChain)
with{
    maxTubeLength = maxLength;
    lengthTuning = 0.04;
    tunedLength = tubeLength-lengthTuning;
    endTubeReflexion = si.smooth(0.95)*0.99;
    tubeChain =
        chain(
            in(excitation) :
            terminations(endTubeReflexion,
                openTube(maxTubeLength,tunedLength),
                endTubeReflexion) :
            out
        );
};

Reflections at the ends of the tube are modeled by a pretty heavy lowpass filter (si.smooth(0.95)). Energy is introduced on one side of the tube and the output of the system is placed on the other.

Finally, marimbaBarModel and marimbaResTube just need to be connected to each other (marimbaModel is also available in physmodels.lib):

marimbaModel(freq,exPos) =
marimbaBarModel(freq,exPos,maxT60,T60Decay,T60Slope) : marimbaResTube(resTubeLength)
with{
    resTubeLength = freq : f2l;
    maxT60 = 0.1;
    T60Decay = 1;
    T60Slope = 5;
};
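The complete model can then be tested with a simple program (a sketch; the pm prefix, the use of pm.impulseExcitation, and the slider range are assumptions):

```faust
import("stdfaust.lib");
freq = hslider("freq", 65.4, 50, 520, 0.01); // C2 by default
strike = button("gate") : pm.impulseExcitation;
process = strike : pm.marimbaModel(freq, 0) <: _,_; // excitation position 0 (range: 0-4)
```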

marimba_ui_MIDI adds a user interface and an excitation generator to this model and is called by the marimbaMIDI.dsp example available in examples/physicalModeling in the Faust distribution.

Adding Faust Real-Time Audio Support to Android Apps

Introduction

In this tutorial, we demonstrate how to use faust2api to add Faust based real-time audio support to any Android application. While we'll make an Android app from scratch here, the steps should be very similar for any existing app.

The very high-level JAVA API generated by faust2api allows you to interact with any Faust C++ native synthesizer/sound processor in a simple way. This system takes care of everything related to audio that might be useful to turn a smartphone into a musical instrument: audio synthesis/processing, instantiating the audio engines, retrieving sensor data, parsing MIDI messages, handling polyphony, etc.

While it won't have any pretty user interface, the Android app we're about to make here will implement a MIDI-controllable polyphonic synthesizer connected to a small effects chain.

We assume that the Faust distribution is already installed on your system and that you have some basic knowledge of Faust. If you don't, we recommend you to read the Faust Hero in 2 Hours Tutorial.

Additionally, we assume that your Android development tool-chain is already functional and that you have the latest version of the Android SDK (API Level 25, as of Dec. 3, 2016) and NDK (13.1 as of Dec. 3, 2016). If it is not the case, we recommend you to read these instructions. Finally, you should already have some experience making Android apps before doing this tutorial.

QUICK NOTE ON AUDIO LATENCY: As some of you might know, Android is pretty infamous in the world of audio because of its high audio latency. Google folks have been working hard over the last couple of years to fix this problem, and about a year ago they managed to achieve decent latency performance for real-time audio. Everything presented here has been tested on a Nexus 9, and it's pretty good! While we haven't really measured the round-trip latency, we believe that it is below 20ms.

ANOTHER QUICK NOTE: For better performance, we strongly encourage you to use the latest version of the Android SDK. MIDI support was only added to the Android SDK at API level 23; therefore, the app we're about to make here will only run on about 10% of the devices on the market (not our fault...). Finally, all these tools are still in "alpha testing," so it is likely that something will not go well at some point :). If it does, let me know... Oh! And this tutorial is for Linux and OSX only, sorry Windows...

The source code of the first part of this tutorial can be downloaded here.

Creating an App With Basic Faust Audio Support

First, create a new application and "include C++ support" (while the Faust object will run natively on the C++ side, you will be able to control it in JAVA):

After this, select "Phone and Tablet". If you plan to add MIDI support to your app, you will have to choose API 23 as the minimum SDK. If you don't plan to use MIDI, then API 14 should do the job.

Then create an "Empty Activity" using the default parameters for the activity and layout name (MainActivity, etc.). Choose the default parameters for the C++ support. Click "Finish" and make sure that the generated app runs on your device.

For the next step, you will need to have the LATEST version of Faust installed on your system. For this, just clone our git repository, run cd faudiostream-code in a terminal, and then run the usual:

make
sudo make install

Faust has no dependencies (besides C++...), so this should go very smoothly. Faust comes with a wide range of libraries that are now installed on your system (check this page for an exhaustive list). Let's use some of their elements to implement a simple synthesizer. Create a new text file and fill it with the following code:

import("stdfaust.lib");
freq = nentry("freq",200,40,2000,0.01) : si.smoo;
gain = nentry("gain",1,0,1,0.01) : si.smoo;
gate = button("gate") : si.smoo; 
cutoff = nentry("cutoff[midi:ctrl 1]",5000,40,8000,0.01) : si.smoo;
q = nentry("q[midi:ctrl 2]",5,1,50,0.01) : si.smoo;
process = vgroup("synth",os.sawtooth(freq)*gain*gate : fi.resonlp(cutoff,q,1) <: _,_);

and then save it as simpleSynth.dsp. This is just an alias-free sawtooth connected to a resonant lowpass filter. The frequency (freq), the gain (gain), the on/off parameter (gate), and the cutoff frequency (cutoff) are associated with UI elements here in order to control them from JAVA in our Android app. In practice, these UI elements will never be created, but we will use their names to access the different parameters in our API. The Faust compiler basically builds a tree of parameter names based on what was declared in the Faust code. For instance, the following parameters will be accessible:

/synth/freq
/synth/gain
/synth/gate
/synth/cutoff
/synth/q

More information will be given about this later, so for now, you'll just have to trust us ;).

The different si.smoo are just one-pole smoothers (normalized lowpass filters with a pole at 0.999) used to smooth the parameters and prevent clicking. We will see later that si.smoo is also used on the gate parameter as a simple exponential envelope generator. <: _,_ splits the output of the lowpass filter into 2 identical signals. Our Faust object would be valid without it, but there would be audio on only one channel. Once again, if you have no clue what we're talking about here, read our Faust Hero in 2 Hours Tutorial.

Let's now turn this Faust code into an Android API simply by running the following command in a terminal:

faust2api -android simpleSynth.dsp

This should create a zip file called dsp-faust.zip containing all the source files needed to embed our Faust object in the Android app as well as a markdown documentation specific to that object (README.md) that we strongly recommend you to read now. Additionally, we encourage you to check the faust2api documentation at this point.

/java contains the JAVA portion of the API and /cpp, its C++ portion. While the JAVA package associated with this API is com.DspFaust, it can be changed when calling faust2api using the -package option.

Create a folder path /app/src/main/java/com/DspFaust in your app and copy and paste the content of dsp-faust/java in it (as usual, the package of JAVA classes must follow the path where JAVA files are placed, etc.). Similarly, copy the content of dsp-faust/cpp in app/src/main/cpp.

In Android Studio, in your application tree on the left, open External Build Files/CMakeLists.txt and edit it to make it look like this (making all the necessary adjustments, of course):

cmake_minimum_required(VERSION 3.4.1)
add_library( dsp_faust SHARED src/main/cpp/java_interface_wrap.cpp src/main/cpp/DspFaust.cpp )
find_library( log-lib log )
target_link_libraries( dsp_faust ${log-lib} )

You can get rid of all the comments of course...

Now open Gradle Scripts/build.gradle (Module: app) and edit the cppFlags as such:

cppFlags "-O3 -fexceptions -frtti -lOpenSLES"

After this, re-sync Gradle. Hopefully you didn't get any errors :). At this point, your app tree should look like this:

You're now ready to use the Faust DSP object in your app! Modify MainActivity.java (java/com/example...) to make it look like this (the package name is probably different in your case, of course):

package com.example.romain.faustinstrument;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import com.DspFaust.DspFaust;

public class MainActivity extends AppCompatActivity {
    DspFaust dspFaust;
    
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);     
        int SR = 48000;
        int blockSize = 128;
        dspFaust = new DspFaust(SR,blockSize);
        dspFaust.start();
        dspFaust.setParamValue("/synth/gate", 1);
        //dspFaust.setParamValue(3, 1);
    }
    
    @Override
    public void onDestroy(){
        super.onDestroy();
        dspFaust.stop();
    }
}

At this point, we assume that you read both the README generated by faust2api as well as the faust2api documentation.

As you can see, all we're doing here is creating a DspFaust object and initializing it with a specific sampling rate and block size in the onCreate method. Those were chosen to achieve the best audio latency performance on the Nexus 9 that we used to write this tutorial, based on this table.

Next, we start the audio process and set the value of the gate parameter declared in our Faust code to 1 in order to hear something. Notice that either the parameter address or the parameter index can be used to carry out this operation. dspFaust.stop() takes care of gracefully shutting down the audio.

Run the app on your device and you should hopefully hear a very ugly filtered sawtooth wave! The various parameters of your Faust object can be set at any time using the setParamValue method. Thus, you could add UI elements such as sliders, buttons, etc. to your interface and use them to control your synth.

The parameters of your DspFaust object can easily be retrieved by writing something like:

for(int i=0; i < dspFaust.getParamsCount(); i++){
    System.out.println(dspFaust.getParamAddress(i));
}

At this point you might wonder what happens if you want to use several Faust objects. Well, the idea here is that you don't :)! You should basically make sure that your Faust code takes care of everything related to audio in your application.

Finally, things work pretty much the same way if the Faust code you're using has an audio input (i.e., an effect). However, since accessing the microphone of the device is more "sensitive" than what we did before, Android requires you to add the following line to your application manifest (before the application tag):

<uses-permission android:name="android.permission.RECORD_AUDIO"/>

While that used to be enough before API level 23, you now also have to request this permission at runtime. If you don't know how to do this, read this page. A quick fix is to go to the app preferences on the device and, under the permissions tab, allow the use of the microphone. If you don't do this, your app will crash when it starts, which is not nice...

Making a MIDI Polyphonic Synthesizer

In this tutorial, we'll turn the basic synthesizer that we made in the previous section into a MIDI controllable polyphonic synth. We'll also add a reverb at the end of the chain to make things sound nicer.

Any Faust code can easily be turned into a polyphonic instrument using some of the Faust C++ libraries. Fortunately, in this tutorial you won't have to worry about this: faust2api will take care of everything for you!

The API generated in the previous section is already MIDI enabled which means that you can control some of its parameters using MIDI controllers. You can read the chapter on MIDI in the Faust documentation if you want to learn more about that.

A Faust object can be turned into a polyphonic object simply by declaring the freq, gain and gate parameters. Thus, the Faust code that we wrote in the previous section can become a polyphonic instrument. For this, we need to re-generate our API by calling faust2api once again, this time specifying the number of voices of polyphony:

faust2api -android -nvoices 12 simpleSynth.dsp

It's better to have too many voices than not enough: voices are only allocated and computed when they are needed anyway...

After this, replace the version of DspFaust.cpp in your Android app project with the one that was just generated (you don't need to replace the other files since they didn't change). Remove any setParamValue in MainActivity.java (the gate parameter is probably not accessible any more). You can now use a series of MIDI and polyphony related methods such as keyOn(), keyOff(), newVoice(), deleteVoice() etc. (see the README generated by faust2api for an exhaustive list). Try to add the following line to the onCreate method of MainActivity.java after dspFaust.start() and run your application:

dspFaust.keyOn(70,100);

The result should be quite similar to what we had before :) : we're generating a Bb (midi note 70) with a velocity of 100. This method can be used at any time to start a new note. The note can be terminated by calling:

dspFaust.keyOff(70);

keyOff should be called with the same MIDI note number as the one used with keyOn to end that specific voice.
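Behind the scenes, the note number passed to keyOn/keyOff determines the freq parameter of the voice through the conventional equal-temperament mapping (A4 = note 69 = 440 Hz). The sketch below (with a hypothetical class name, just for illustration — the Faust wrapper computes this for you) shows that mapping:

```java
// Equal-temperament conversion from MIDI note number to frequency in Hz
// (A4 = note 69 = 440 Hz). Illustration only: the Faust polyphonic
// wrapper applies this mapping to the freq parameter for you.
public class MidiFreq {
    public static double midiToFreq(int note) {
        return 440.0 * Math.pow(2.0, (note - 69) / 12.0);
    }

    public static void main(String[] args) {
        System.out.println(midiToFreq(69)); // A4 -> 440.0
        System.out.println(midiToFreq(70)); // Bb4, the note started by keyOn(70,100)
    }
}
```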

setParamValue() can still be used to change the value of parameters other than freq, gain and gate and will affect all voices. If you wish to change a parameter for a specific voice, setVoiceParamValue() can be used.

In this tutorial, we want to be able to control our Faust synth using a MIDI keyboard. You could implement everything from scratch at this point in JAVA, following the tutorial given on the Android website and using the methods presented above to interact with your polyphonic Faust object. Fortunately, Faust can actually take care of all that for you too! The only thing that needs to be done is to pass MIDI events to our Faust object using the propagateMidi() method. You still have to implement your MIDI collector in JAVA, but the Faust API will take care of parsing the events and triggering the notes. Our goal here is not to teach you how to do that. If you have no idea how this works, read this tutorial on the Android website. Just for the sake of the example, here's a simple (and somewhat dirty) way of doing this directly in our MainActivity.java:

public class MainActivity extends AppCompatActivity {
    DspFaust dspFaust;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        int SR = 48000;
        int blockSize = 128;

        dspFaust = new DspFaust(SR,blockSize);
        dspFaust.start();

        class MyReceiver extends MidiReceiver {
            public void onSend(byte[] data, int offset,
                               int count, long timestamp) {
                // we only consider MIDI messages containing 3 bytes (this is just an example)
                if(count%3 == 0) {
                    int nMessages = count / 3; // in case the event contains several messages
                    for (int i = 0; i < nMessages; i++) {
                        int type = (int) (data[offset + i*3] & 0xF0);
                        int channel = (int) (data[offset + i*3] & 0x0F);
                        int data1 = (int) data[offset + 1 + i*3];
                        int data2 = (int) data[offset + 2 + i*3];
                        dspFaust.propagateMidi(3, timestamp, type, channel, data1, data2);
                    }
                }
            }
        }

        final MyReceiver midiReceiver = new MyReceiver();

        Context context = getApplicationContext();
        final MidiManager m = (MidiManager)context.getSystemService(Context.MIDI_SERVICE);
        final MidiDeviceInfo[] infos = m.getDevices();

        // opening all the available ports and devices already connected
        for(int i=0; i<infos.length; i++){
            final int currentDevice = i;
            m.openDevice(infos[i], new MidiManager.OnDeviceOpenedListener() {
                @Override
                public void onDeviceOpened(MidiDevice device) {
                    if (device == null) {
                        Log.e("", "could not open device");
                    } else {
                        for(int j=0; j<infos[currentDevice].getOutputPortCount(); j++) {
                            MidiOutputPort outputPort = device.openOutputPort(j);
                            outputPort.connect(midiReceiver);
                        }
                    }
                }
            }, new Handler(Looper.getMainLooper()));
        }

        // adding any newly connected device
        m.registerDeviceCallback(new MidiManager.DeviceCallback() {
            public void onDeviceAdded( final MidiDeviceInfo info ) {
                m.openDevice(info, new MidiManager.OnDeviceOpenedListener() {
                    @Override
                    public void onDeviceOpened(MidiDevice device) {
                        if (device == null) {
                            Log.e("", "could not open device");
                        } else {
                            for (int j = 0; j < info.getOutputPortCount(); j++) {
                                MidiOutputPort outputPort = device.openOutputPort(j);
                                outputPort.connect(midiReceiver);
                            }
                        }
                    }
                }, new Handler(Looper.getMainLooper()));
            }

            public void onDeviceRemoved( final MidiDeviceInfo info ) {

            }

        }, new Handler(Looper.getMainLooper()));
    }

    @Override
    public void onDestroy(){
        super.onDestroy();
        dspFaust.stop();
    }
}

At this point, if you re-run your app and connect a MIDI keyboard to it, you should be able to generate notes and play music! The same approach (retrieving "raw" information in JAVA and passing it to the native side of the app) can be used for the accelerometer and gyroscope by calling propagateGyr and propagateAcc.
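The bit masking done in MyReceiver.onSend can be checked outside of Android; this standalone sketch (with a hypothetical class name) splits one raw 3-byte MIDI message into the fields that get passed to propagateMidi:

```java
import java.util.Arrays;

// Split one raw 3-byte MIDI message into status type, channel and data
// bytes, as done in MyReceiver.onSend above.
public class MidiParse {
    public static int[] parse(byte[] msg) {
        int type = msg[0] & 0xF0;    // upper nibble: message type (0x90 = note-on)
        int channel = msg[0] & 0x0F; // lower nibble: MIDI channel (0-15)
        int data1 = msg[1] & 0x7F;   // first data byte (e.g., note number)
        int data2 = msg[2] & 0x7F;   // second data byte (e.g., velocity)
        return new int[]{type, channel, data1, data2};
    }

    public static void main(String[] args) {
        // note-on (0x90) on channel 3, note 70, velocity 100
        byte[] noteOn = {(byte) 0x93, 70, 100};
        System.out.println(Arrays.toString(parse(noteOn))); // [144, 3, 70, 100]
    }
}
```

Masking the data bytes with 0x7F simply enforces that they stay in the 7-bit range MIDI defines for them.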

Not bad, huh ;)? However, several improvements can be made to our instrument. First, we should make sure that smoothing happens on the freq and gain parameters of our Faust code only after the note has started. This will prevent the weird attack that we currently hear, which is due to the fact that every time a new note starts, its frequency is smoothed from its previous value to its new one.

Also, we can associate the cutoff frequency and the q parameters of the filter with MIDI controllers (1 and 2 in the example) directly from the Faust code using metadata (check the MIDI section of the Faust documentation for more information about this). Let's update simpleSynth.dsp as follows:

import("stdfaust.lib");
freq = nentry("freq",200,40,2000,0.01) : si.polySmooth(gate,0.999,2);
gain = nentry("gain",1,0,1,0.01) : si.polySmooth(gate,0.999,2);
gate = button("gate") : si.smoo; 
cutoff = nentry("cutoff[midi:ctrl 1]",5000,40,8000,0.01) : si.smoo;
q = nentry("q[midi:ctrl 2]",5,1,50,0.01) : si.smoo;
process = vgroup("synth",os.sawtooth(freq)*gain*gate : fi.resonlp(cutoff,q,1) <: _,_);

In addition to this, to make things sound even cooler, we will add a reverb to our synth. It is a bad idea to do this in simpleSynth.dsp since we don't want a different reverb for each voice (this would be very inefficient computationally...). Fortunately, faust2api can take a Faust effect as one of its arguments and plug it automatically into our polyphonic synth.

The Faust libraries contain a wide range of reverbs. We will use dm.zita_rev1. Create a new Faust file called effect.dsp and place it in the same folder as simpleSynth.dsp. effect.dsp should look like this:

import("stdfaust.lib");
process = dm.zita_rev1;

At this point, you can re-run faust2api with the following options:

faust2api -android -nvoices 12 -effect effect.dsp simpleSynth.dsp

Copy the new version of DspFaust.cpp into your Android app project and re-run the app on your device. Try to play your synth with a MIDI keyboard and hopefully everything should work very nicely! Note that some extra parameters should now be available to control with setParamValue, so you might want to check the README again.

Using Built-In Sensors

In case you would like to use the built-in accelerometer or gyroscope of your device to control some of the parameters of your Faust object, all you have to do is send it the raw sensor data using propagateAcc (or propagateGyr for the gyroscope). After that, mappings can be configured directly from the Faust code using this technique or using the setAccConverter and setGyrConverter methods.

For example, in addition to being controlled by MIDI CC 1, the cutoff parameter of the code presented in the previous section could be modified by the x axis of the built-in accelerometer:

cutoff = nentry("cutoff[midi:ctrl 1][acc: 0 0 -10 0 10]",5000,40,8000,0.01) : si.smoo;
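The [acc: 0 0 -10 0 10] metadata reads as: accelerometer axis 0, linear curve (0), and an acceleration range of -10 to 10 m/s^2 with a midpoint of 0; roughly speaking, the midpoint lands on the parameter's default value and the extremes on its bounds. The exact curve handling is implemented in the Faust architecture files; the sketch below (hypothetical names, linear case only) just illustrates the idea:

```java
// Illustrative piecewise-linear version of an [acc: axis curve amin amid amax]
// mapping. Sketch only: the real implementation lives in the Faust
// architecture files and also supports non-linear curves.
public class AccMapping {
    public static double map(double acc, double amin, double amid, double amax,
                             double pmin, double pdef, double pmax) {
        acc = Math.max(amin, Math.min(amax, acc)); // clip to the declared range
        if (acc < amid) return pmin + (acc - amin) / (amid - amin) * (pdef - pmin);
        return pdef + (acc - amid) / (amax - amid) * (pmax - pdef);
    }

    public static void main(String[] args) {
        // cutoff is declared as nentry("cutoff...",5000,40,8000,0.01)
        System.out.println(map(0, -10, 0, 10, 40, 5000, 8000));  // midpoint -> 5000.0
        System.out.println(map(10, -10, 0, 10, 40, 5000, 8000)); // max -> 8000.0
    }
}
```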

Your MainActivity could then be modified to look like this (this is kind of simplistic, but it's just for the sake of the example, and it should work anyway):

public class MainActivity extends AppCompatActivity implements SensorEventListener {
    DspFaust dspFaust;
    private SensorManager sensorManager;
    private Sensor accelerometer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                WindowManager.LayoutParams.FLAG_FULLSCREEN);

        int SR = 48000;
        int blockSize = 128;

        dspFaust = new DspFaust(SR,blockSize);
        dspFaust.start();

        sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
        accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        for (int i = 0 ;i<event.values.length;i++){
            dspFaust.propagateAcc(i, event.values[i]);
        }
    }

    @Override
    public void onDestroy(){
        super.onDestroy();
        dspFaust.stop();
    }
}

The source code of the app that we just made (that you probably downloaded at the beginning of the tutorial) implements all these features and contains some extra goodies. For example, the touch screen is used as an X/Y control surface to modulate the cutoff frequency and the q of the filter used in our synth.

Note that Faust can also be used to generate ready-to-use Android apps directly from a Faust code using faust2android.

That's it folks! Once again, this is an ongoing project and we're probably missing many features here, but if you notice that something doesn't work or that there's something you cannot do because of a missing feature, let me know and I'll take care of it.


Adding Faust Real-Time Audio Support to iOS Apps

Introduction

In this tutorial, we demonstrate how to use faust2api to add Faust based real-time audio support to any iOS application. While we'll make an iOS app from scratch here, the steps should be very similar for any existing app.

The high-level API generated by faust2api allows you to interact with any Faust C++ synthesizer/sound processor in a simple way. This system takes care of everything related to audio that might be useful to turn a smartphone into a musical instrument: audio synthesis/processing, instantiating the audio engines, retrieving sensor data, parsing MIDI messages, polyphony handling, etc.

While it won't have any pretty user interface, the iOS app we're about to make here will implement a MIDI controllable polyphonic synthesizer connected to a small effects chain.

We assume that the Faust distribution is already installed on your system and that you have some basic knowledge of Faust. If you don't, we recommend you to read the Faust Hero in 2 Hours Tutorial.

Additionally, we assume that your iOS development tool-chain is already functional (Xcode, command line tools, etc.) and that you have some experience doing iOS development.

QUICK NOTE: The Faust API used in this tutorial is entirely written in C++ and can therefore be easily integrated into an Objective-C-based app. It should all work fine with Swift too, but we never tested it. As a result, this demo only uses C++ and Objective-C.

ANOTHER QUICK NOTE: Our API uses some deprecated Apple frameworks (yes, they do this all the time), so you will probably get a fair amount of warnings during compilation due to that. This is all safe and fine and will not affect the behavior of your app. We'll fix that at some point but since it's not really a big deal, we're taking our time :).

The source code of the first part of this tutorial can be downloaded here.

Creating an App With Basic Faust Audio Support

In Xcode, create a new "Single View" iOS application project (File/New/Project/iOS/Single View Application). Give it the name you want (we used FaustInstrumentIOS for this tutorial), choose Objective-C as the language, and leave the default configuration for any other option.

Go to the app configuration by clicking on the name of your app (FaustInstrumentIOS in our case) in the tree in the menu on the left. Then, in TARGETS, choose the name of your app once again. In Build Phases/Link Binaries With Libraries, add the CoreMIDI and the AudioToolbox frameworks:

CoreMIDI is not necessary for the first half of this tutorial, so you might choose not to import it at this point.

For the next step, you will need to have the LATEST version of Faust installed on your system. For this, just clone our git repository, then run cd faudiostream-code in a terminal and the usual:

make
sudo make install

Faust has no dependencies (besides C++...), so this should go very smoothly. Faust comes with a wide range of libraries that are now installed on your system (check this page for an exhaustive list). Let's use some of their elements to implement a simple synthesizer. Create a new text file and fill it with the following code:

import("stdfaust.lib");
freq = nentry("freq",200,40,2000,0.01) : si.smoo;
gain = nentry("gain",1,0,1,0.01) : si.smoo;
gate = button("gate") : si.smoo; 
cutoff = nentry("cutoff",5000,40,8000,0.01) : si.smoo;
process = vgroup("synth",os.sawtooth(freq)*gain*gate : fi.lowpass(3,cutoff) <: _,_);

and then save it as basicSynth.dsp. This is just an alias-free sawtooth connected to a 3rd-order lowpass filter. The frequency (freq), the gain (gain), the on/off parameter (gate) and the cut-off frequency (cutoff) are associated with UI elements here in order to control them from our iOS app. In practice, these UI elements will never be created, but we will use their names to access the different parameters in our API. The Faust compiler basically builds a tree of parameter names based on what was declared in the Faust code. For instance, the following parameters will be accessible:

/synth/freq
/synth/gain
/synth/gate
/synth/cutoff

More information will be given about this kind of thing in the following steps, so for now, you'll just have to trust us ;).

The different si.smoo are just one-pole smoothers (normalized lowpass filters with a pole at 0.999) used to smooth the parameters and prevent clicking. We will see later that si.smoo is also used on the gate parameter as a simple exponential envelope generator. <: _,_ splits the output of the lowpass filter into 2 different signals. Our Faust object would be valid without it, but there would be audio on only one channel. Once again, if you have no clue what we're talking about here, read our Faust Hero in 2 Hours Tutorial.
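The recurrence that si.smoo computes is tiny; here is a sketch of it (in Java, since this page already uses Java for the Android examples, with a hypothetical class name):

```java
// One-pole smoother equivalent to Faust's si.smoo (si.smooth(0.999)):
// y[n] = (1 - s) * x[n] + s * y[n-1], with the pole s = 0.999.
public class Smoother {
    private final double s = 0.999;
    private double y = 0.0;

    public double tick(double x) {
        y = (1.0 - s) * x + s * y;
        return y;
    }

    public static void main(String[] args) {
        Smoother sm = new Smoother();
        double v = 0.0;
        // Feed a unit step for one second at 48 kHz: the output ramps
        // smoothly toward 1.0 instead of jumping, which is exactly what
        // removes the clicks on parameter changes.
        for (int i = 0; i < 48000; i++) v = sm.tick(1.0);
        System.out.println(v);
    }
}
```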

Let's now turn this Faust code into an iOS API simply by running the following command in a terminal:

faust2api -ios basicSynth.dsp

This should create a zip file called dsp-faust.zip containing all the source files needed to embed our Faust object in the iOS app, as well as markdown documentation specific to that object (README.md) that we strongly recommend you read now. Additionally, we encourage you to check the faust2api documentation at this point.

Drag-and-drop DspFaust.cpp and DspFaust.h into your app tree in Xcode. Change the extension of ViewController.m to ViewController.mm. This will allow us to import the C++ API that you just created in this file. Your app tree should now look the same as the one presented in the screenshot above.

As of iOS 10, you should also configure the "Privacy - Microphone Usage Description" key in Info.plist under "Information Property List". Just put anything you want for the value (like "yes"). If you don't do it, your app will crash on start-up.

You're now ready to use the Faust DSP object in your app! Modify ViewController.mm to make it look like this:

#import "ViewController.h"
#import "DspFaust.h"

@interface ViewController ()

@end

@implementation ViewController{
    DspFaust *dspFaust;
}
    
- (void)viewDidLoad {
    [super viewDidLoad];
          
    const int SR = 44100;
    const int bufferSize = 256;
                      
    dspFaust = new DspFaust(SR,bufferSize);
    dspFaust->start();
    dspFaust->setParamValue("/synth/gate", 1);
    //dspFaust->setParamValue(3, 1);
}
                              
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
                                    
    dspFaust->stop();
    delete dspFaust;
}
                                            
@end

At this point, we assume that you have read both the README generated by faust2api and the faust2api documentation.

As you can see, all we do here is create a DspFaust object and initialize it with a specific sampling rate and block size in the viewDidLoad method.

Next, we start the audio process and we set the value of the gate parameter declared in our Faust code to 1 to be able to hear something. Notice that both the parameter address and the parameter index can be used to carry out this operation. dspFaust->stop() takes care of gracefully shutting down audio.

Run the app on your device and you should hopefully hear a very ugly filtered sawtooth wave! The various parameters of your Faust object can be set at any time using the setParamValue method. Thus, you could add UI elements such as sliders, buttons, etc. to your interface and use them to control your synth.

The parameters of your DspFaust object can easily be retrieved by writing something like:

for(int i=0; i < dspFaust->getParamsCount(); i++){
    std::cout << dspFaust->getParamAddress(i) << "\n";
}

At this point you might wonder what happens if you want to use several Faust objects. Well, the idea here is that you don't :)! You should basically make sure that your Faust code takes care of everything related to audio in your application.

Finally, things work pretty much the same way if the Faust code you're using has an audio input (i.e. effect).

Making a MIDI Polyphonic Synthesizer

In this tutorial, we'll turn the basic synthesizer that we made in the previous section into a MIDI controllable polyphonic synth. We'll also add a reverb at the end of the chain to make things sound nicer.

Any Faust code can easily be turned into a polyphonic instrument using some of the Faust C++ libraries. Fortunately, in this tutorial you won't have to worry about this: faust2api will take care of everything for you! You can read the chapter on MIDI in the Faust documentation to learn more about how to configure the MIDI behavior of a Faust object.

A Faust object can be turned into a polyphonic object simply by declaring the freq, gain and gate parameters. Thus, the Faust code that we wrote in the previous section can become a polyphonic instrument. For this, we need to re-generate our API by calling faust2api once again, this time specifying the number of voices of polyphony:

faust2api -ios -nvoices 12 -midi basicSynth.dsp

It's better to have too many voices than not enough: voices are only allocated and computed when they are needed anyway...

The -midi option adds RtMidi support to the API. This will make sure that any MIDI device (physical or virtual) that gets connected to your iOS device can automatically control our Faust object!

After this, replace the version of DspFaust.cpp in your app project with the one that was just generated (you don't need to replace the header file since it didn't change). Remove any setParamValue in ViewController.mm (the gate parameter is probably not accessible any more). You can now use a series of MIDI and polyphony related methods such as keyOn(), keyOff(), newVoice(), deleteVoice() etc. (see the README generated by faust2api for an exhaustive list). Try to add the following line to the viewDidLoad method of ViewController.mm after dspFaust->start() and run your application:

dspFaust->keyOn(70,100);

The result should be quite similar to what we had before :) : we're generating a Bb (midi note 70) with a velocity of 100. This method can be used at any time to start a new note. The note can be terminated by calling:

dspFaust->keyOff(70);

keyOff should be called with the same MIDI note number as the one used with keyOn to end that specific voice.

setParamValue() can still be used to change the value of parameters other than freq, gain and gate and will affect all voices. If you wish to change a parameter for a specific voice, setVoiceParamValue() can be used.

Because we used the -midi option when the API was generated, you should be able to generate notes and play music if you connect a MIDI keyboard to your iOS device: it's that simple!

Not bad, huh ;)? However, several improvements can be made to our instrument. First, we should make sure that smoothing happens on the freq and gain parameters of our Faust code only after the note has started. This will prevent the weird attack that we currently hear, which is due to the fact that every time a new note starts, its frequency is smoothed from its previous value to its new one.

Also, we can associate the cutoff frequency parameter of the filter with a MIDI controller (20 in the example) directly from the Faust code using metadata (check the MIDI section of the Faust documentation for more information about this). Let's update basicSynth.dsp as follows:

import("stdfaust.lib");
freq = nentry("freq",200,40,2000,0.01) : si.polySmooth(gate,0.999,2);
gain = nentry("gain",1,0,1,0.01) : si.polySmooth(gate,0.999,2);
gate = button("gate") : si.smoo; 
cutoff = nentry("cutoff[midi:ctrl 20]",5000,40,8000,0.01) : si.smoo;
process = vgroup("synth",os.sawtooth(freq)*gain*gate : fi.lowpass(3,cutoff) <: _,_);

In addition to this, to make things sound even cooler, we will add a reverb to our synth. It is a bad idea to do this in basicSynth.dsp since we don't want a different reverb for each voice (this would be very inefficient computationally...). Fortunately, faust2api can take a Faust effect as one of its arguments and plug it automatically into our polyphonic synth.

The Faust libraries contain a wide range of reverbs. We will use dm.zita_rev1. Create a new Faust file called effect.dsp and place it in the same folder as basicSynth.dsp. effect.dsp should look like this:

import("stdfaust.lib");
process = dm.zita_rev1;

At this point, you can re-run faust2api with the following options:

faust2api -ios -nvoices 12 -midi -effect effect.dsp basicSynth.dsp

Copy the new version of DspFaust.cpp into your Xcode project and re-run the app on your device. Try to play your synth with a MIDI keyboard and hopefully everything should work very nicely! Note that some extra parameters should now be available to control with setParamValue, so you might want to check the README again.

Using Built-In Sensors

In case you would like to use the built-in accelerometer or gyroscope of your device to control some of the parameters of your Faust object, all you have to do is send it the raw sensor data using propagateAcc (or propagateGyr for the gyroscope). After that, mappings can be configured directly from the Faust code using this technique or using the setAccConverter and setGyrConverter methods.

For example, in addition to being controlled by MIDI CC 20, the cutoff parameter of the code presented in the previous section could be modified by the x axis of the built-in accelerometer:

cutoff = nentry("cutoff[midi:ctrl 20][acc: 0 0 -10 0 10]",5000,40,8000,0.01) : si.smoo;

ViewController.mm could then look like this (you will have to import the CoreMotion framework to your project for this to work):

#import "ViewController.h"
#import "DspFaust.h"
#import <CoreMotion/CoreMotion.h>

#define kMotionUpdateRate 30
#define ONE_G 9.81

@interface ViewController ()

@end

@implementation ViewController{
    DspFaust *dspFaust;
    CMMotionManager* _motionManager;
    NSTimer* _motionTimer;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    
    const int SR = 44100;
    const int bufferSize = 256;
    
    dspFaust = new DspFaust(SR,bufferSize);
    dspFaust->start();
    
    [self startMotion];
}


- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    
    [self stopMotion];
    dspFaust->stop();
    delete dspFaust;
}

- (void)startMotion
{
    if (_motionManager == nil)
    {
        _motionManager = [[CMMotionManager alloc] init];
        [_motionManager startAccelerometerUpdates];
        [_motionManager startGyroUpdates];
    }
    _motionTimer = [NSTimer scheduledTimerWithTimeInterval:1./kMotionUpdateRate target:self 
        selector:@selector(updateMotion) userInfo:nil repeats:YES];
}

// Stop updating sensors
- (void)stopMotion
{
    if (_motionManager != nil)
    {
        [_motionManager stopAccelerometerUpdates];
        [_motionManager stopGyroUpdates];
        _motionManager = nil;
        [_motionTimer invalidate];
    }
}

- (void)updateMotion
{
    dspFaust->propagateAcc(0, _motionManager.accelerometerData.acceleration.x * ONE_G);
    dspFaust->propagateAcc(1, _motionManager.accelerometerData.acceleration.y * ONE_G);
    dspFaust->propagateAcc(2, _motionManager.accelerometerData.acceleration.z * ONE_G);
    dspFaust->propagateGyr(0, _motionManager.gyroData.rotationRate.x);
    dspFaust->propagateGyr(1, _motionManager.gyroData.rotationRate.y);
    dspFaust->propagateGyr(2, _motionManager.gyroData.rotationRate.z);
}

@end

This is kind of "dirty" but it should give you an idea of how this works ;).
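Note the ONE_G factor in updateMotion: CoreMotion reports acceleration in units of g, while the Faust API expects raw values in m/s^2 (the units Android's SensorManager delivers), hence the multiplication by 9.81. A trivial sketch of that conversion (hypothetical class name, shown in Java for consistency with the Android examples):

```java
// Convert an acceleration expressed in units of g (as CoreMotion reports
// it) to m/s^2, as done with the ONE_G factor in updateMotion above.
public class GravityScale {
    static final double ONE_G = 9.81;

    public static double toMetersPerSecondSquared(double g) {
        return g * ONE_G;
    }

    public static void main(String[] args) {
        System.out.println(toMetersPerSecondSquared(1.0)); // 9.81
    }
}
```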

Note that Faust can also be used to generate ready-to-use iOS apps directly from a Faust code using faust2ios.

That's it folks! From here, you should be able to easily add a UI to your app and control some of the parameters of your Faust object with it. Once again, this is an ongoing project and we're probably missing many features here, but if you notice that something doesn't work or that there's something you cannot do because of a missing feature, let me know and I'll take care of it.


Making Faust-Based Smartphone Musical Instruments

Smartphones gather in a single entity a set of sensors (touch screen, accelerometer, gyroscope, etc.), a powerful computer, a microphone, and a speaker, and they are self-powered, which makes them an ideal platform for implementing standalone musical instruments. Thanks to these features, they are also much closer to "traditional" acoustic instruments than other electronic musical instruments, where the controller, the synthesizer and the sound production parts are separated.

This tutorial demonstrates how to make the "virtual portion" of such instruments using the Faust programming language and its SmartKeyboard app generator. We assume that you already have some background in sound synthesis and that you have a basic knowledge of Faust. If you don't, we strongly advise you to read our Faust Hero in 2 Hours tutorial first.

You MUST use a Linux or a Mac computer for this tutorial: none of this will work on Windows (sorry)! Additionally, if you're planning on using an iOS device, you will have to use a Mac to program it. Finally, you should make sure that the mobile device you will use meets these requirements.

Getting Ready

First, you should install the LATEST version of Faust on your computer.

Installing Faust

Faust must be compiled and installed from its source code. The features used in this tutorial are relatively new and will probably not be available in any pre-compiled package so PLEASE, FOLLOW THESE INSTRUCTIONS (don't use MacPorts, Brew, APT, etc.)!

The latest version of the source code of Faust can be downloaded from our git repository. Unzip this file in a folder that you will remember (that's important too :) ) and open a terminal window. In the terminal, go to the faust-master folder that you just extracted (this can be done simply by typing cd followed by a space, then dragging and dropping the folder into the terminal and pressing return):

cd ~/faust-master

To view the content of the Faust repository, you can just type:

ls

Now, let's compile Faust by typing:

make

This shouldn't take more than 4 minutes, depending on how powerful your machine is. Hopefully, you won't get any errors (if you do, send me an email). Now install Faust by typing (this will require you to enter your password):

sudo make install

To make sure that Faust was properly installed, type:

faust

If you get an error message saying Error no input file, then you're good!

Getting Ready to Develop Apps

For this, follow the instructions on setting up your system for faust2smartkeyb.

Make Your First App

The final Faust code of this tutorial can be downloaded here.

Making a Simple Synthesizer App Project

First, write this simple Faust code:

import("stdfaust.lib");
f = nentry("freq",200,40,2000,0.01);
g = nentry("gain",1,0,1,0.01);
t = button("gate");
process = os.sawtooth(f)*g*t <: _,_;

and store it in a file called mySynth.dsp. Make sure that you use a "reasonably decent" text editor for that (e.g., TextMate, Xed, etc., and not TextEdit, etc.). Create a faustWorkspace folder on your system that we'll use as our working directory and save mySynth.dsp in it.

The code that you just wrote should speak for itself: it's just a sawtooth wave generator whose frequency is controlled by the freq parameter. gain and gate are used respectively to scale the gain of the signal and as a trigger. We'll see later in this tutorial that the naming conventions used here matter since they allow a SmartKeyboard interface to be connected to this synth (freq will be set as a function of the MIDI note number, gain from the MIDI velocity, and gate from MIDI note-on/note-off events). Check this page to get the full list of standard parameters compatible with SmartKeyboard. Note that the output of the sawtooth oscillator is split into 2 signals in order to get sound on both the left and right channels of the device we're using.

The type of user interface elements (e.g., hsliders, vsliders, etc.) that you choose to use here doesn't matter since the "standard" Faust interface will be replaced by a SmartKeyboard interface. We only use nentry and button here as a way to declare parameters for our synth.

Now open a new terminal window and go into faustWorkspace using the cd command. From there, run:

faust2smartkeyb -android -reuse -source mySynth.dsp

to make an Android app or:

faust2smartkeyb -ios -reuse -source mySynth.dsp

to make an iOS app.

In both cases, this will create a folder called faustsmartkeyb.mySynth in faustWorkspace. This folder contains an Android Studio project if you used -android and an Xcode project if you used -ios. In both cases, you will find the complete source code of a ready-to-be-compiled app.

Installing the App on An Android Device

Open Android Studio and open faustsmartkeyb.mySynth with it. Android Studio will take care of configuring the app project for your system and will tell you if some adjustments need to be made (e.g., if you don't have the right version of the Android SDK installed, etc.). Connect an Android device to your computer and make sure that developer mode is enabled on it. If you don't know how to do this, a Google search should help you figure it out (it varies between devices).

Now, click on "Run/Run App", your device should appear in the prompted list, select it and check "Use same selection for future launches". Click on "OK", the app should start building (this might take a while since it's the first time you build it). If the build process was successful (fingers crossed), then the app should launch on your device. You should see a "weird" keyboard. If you press one of the keys on it, you should get some sound!

Installing the App on An iOS Device (Mac Only)

Open faustsmartkeyb.mySynth with the Finder and double click on Faust.xcodeproj. The Xcode project corresponding to the Faust code that you just wrote should open. At this point, you'll probably have to fix signing issues (see this). Sorry, but this might take a while to figure out and we can't really help you with that here (thanks Apple).

Assuming that your app is properly signed and that you fixed any issues related to that, connect an iOS device to your computer and select it as the target in the top left corner of the window, right next to the "play" and "stop" buttons. Now click on the "Play" button; the app should start building and will launch after that. You might get a bunch of warnings, don't pay attention to them :). You should now see a "weird" keyboard on the screen of your iOS device. If you press one of the keys, you should get some sound!

Customizing the Interface

The steps presented in the previous section will have to be repeated every time an update is made to the Faust code (running faust2smartkeyb and building the app in Android Studio or Xcode). You don't have to close Android Studio or Xcode since faust2smartkeyb will simply update the existing project as long as you use the -reuse option.

Now let's customize the interface of our app by configuring the SmartKeyboard interface (check the SmartKeyboard documentation for more information on this). At the beginning of your Faust code (right before the declaration of the freq parameter), add the following interface declaration:

declare interface "SmartKeyboard{
    'Number of Keyboards':'2',
    'Keyboard 0 - Number of Keys':'13',
    'Keyboard 1 - Number of Keys':'13',
    'Keyboard 0 - Lowest Key':'72',
    'Keyboard 1 - Lowest Key':'60'
}";

Here, we're configuring our app's UI to have 2 keyboards of 13 keys each (one octave). The first note of the top keyboard is MIDI note 72 (C4) and that of the bottom keyboard is 60 (C3). Save the file, re-run faust2smartkeyb with the same parameters as before, and rebuild and relaunch the app in Android Studio or Xcode. Your app should now look like this:

By default, this interface starts a new note every time a finger slides to a new key and an independent voice is associated to each finger (thanks to faust2api, SmartKeyboard takes care of voice allocation by default). We'll see later in this tutorial that this type of behavior can easily be changed.

Adding More Control

Now let's try to map the gain of the generated sound to the Y position of the finger on each keyboard. The X and Y position of a specific event can be easily retrieved in the Faust code simply by setting the Keyboard N - Send X and Keyboard N - Send Y configuration keys to 1 and by using the x and y standard parameters. In our case, we'll only need y. The previous Faust code can be rewritten to look like this:

declare interface "SmartKeyboard{
  'Number of Keyboards':'2',
  'Keyboard 0 - Number of Keys':'13',
  'Keyboard 1 - Number of Keys':'13',
  'Keyboard 0 - Lowest Key':'72',
  'Keyboard 1 - Lowest Key':'60',
  'Keyboard 0 - Send Y':'1',
  'Keyboard 1 - Send Y':'1'
}";

import("stdfaust.lib");

f = nentry("freq",200,40,2000,0.01);
g = nentry("gain",1,0,1,0.01);
t = button("gate");
y = nentry("y",0.5,0,1,0.01); // y is always normalized between 0 and 1

gain = y;
envelope = t*gain : si.smoo;

process = os.sawtooth(f)*envelope <: _,_;

The x and y parameters receive the normalized (0-1) position of the finger on the current key. Here we're just using y to scale the gain. Note that the smoothing function (si.smoo) is applied to both the trigger signal and gain. That way, we get an exponential envelope generator for free...
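The one-pole smoothing above yields a simple exponential envelope. If an explicit attack and release are preferred, the envelope line could be replaced with en.asr from the Faust envelopes library (a sketch; the 10 ms attack and 50 ms release times are arbitrary choices, not values from this tutorial):

```faust
// hypothetical alternative: ASR envelope instead of one-pole smoothing
// en.asr(attackTime, sustainLevel, releaseTime, trigger)
envelope = gain*en.asr(0.01,1,0.05,t);
```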

Adding Continuous Frequency Control

Continuous pitch control is a very important feature needed to implement vibrato, slides, etc. By default, SmartKeyboard doesn't enable it. In this section, we demonstrate how to add it to the previous code.

Continuous pitch control is implemented through the bend default parameter in SmartKeyboard. bend is a coefficient that should be multiplied by freq (which serves as the reference point for the generated pitch). In other words, we need to write something like:

f = nentry("freq",200,40,2000,0.01);
bend = nentry("bend",1,0,10,0.01);
freq = f*bend;

Since we want to prevent clicks every time a new value of bend is provided, we need to smooth it. In this specific case, it's a bad idea to use si.smoo. Indeed, SmartKeyboard only changes the value of bend if gate>0, which means that the last value of bend is held until a new note is started. However, since si.smoo is constantly smoothing, it will take some time for the value of bend to go from its previous value to the current one when a new note is started. This might result in an "ugly sweep" at the beginning of the note, especially if the previous pitch used for this voice is far from the new one. Thankfully, the Faust libraries implement si.polySmooth, a smoothing function that bypasses smoothing for a period of n samples when a trigger signal is received. Thus, we can rewrite the previous code using that function (where n = 1):

f = nentry("freq",200,40,2000,0.01);
bend = nentry("bend",1,0,10,0.01) : si.polySmooth(t,0.999,1);
t = button("gate");
freq = f*bend;

Note that bend is smoothed here (not freq). In theory, freq should never have to be smoothed when using SmartKeyboard.

Currently, the value of bend is not changing because SmartKeyboard doesn't enable continuous pitch control by default. To change that, we just need to configure the Rounding Mode key. The default mode is 0: no continuous pitch control (bend is never updated). If Rounding Mode = 1, the value of bend is updated continuously as a function of the position of the finger on the keyboard. While this mode can be useful in some cases, it makes the keyboard sound out of tune since the pitch is never rounded to the nearest semitone. Finally, if Rounding Mode = 2, the value of bend is updated only while the finger keeps moving on the keyboard, allowing the pitch to be both in tune and continuous. The pitch rounding behavior can be configured using the Rounding Cycle, Rounding Smooth, Rounding Threshold, and Rounding Update Speed keys.
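As a sketch (using the configuration keys named above; the values are purely illustrative, not recommended settings), the rounding behavior could be declared like this:

```faust
declare interface "SmartKeyboard{
  'Rounding Mode':'2',
  'Rounding Threshold':'3',
  'Rounding Cycle':'5'
}";
```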

For now, we'll choose to use mode 2, which is the most common one. The updated Faust code should now look like this:

declare interface "SmartKeyboard{
  'Number of Keyboards':'2',
  'Rounding Mode':'2',
  'Keyboard 0 - Number of Keys':'13',
  'Keyboard 1 - Number of Keys':'13',
  'Keyboard 0 - Lowest Key':'72',
  'Keyboard 1 - Lowest Key':'60',
  'Keyboard 0 - Send Y':'1',
  'Keyboard 1 - Send Y':'1'
}";

import("stdfaust.lib");

f = nentry("freq",200,40,2000,0.01);
bend = nentry("bend",1,0,10,0.01) : si.polySmooth(t,0.999,1);
g = nentry("gain",1,0,1,0.01);
t = button("gate");
y = nentry("y",0.5,0,1,0.01);

freq = f*bend;
gain = y*g;
envelope = t*gain : si.smoo;

process = os.sawtooth(freq)*envelope <: _,_;

MIDI Support

Well, there's not much to do since it's already there! Indeed, SmartKeyboard implements built-in MIDI support by default so if you connect a MIDI keyboard to your mobile device (you'll probably need an adapter for that), you should be able to control the synth that we just implemented.

As mentioned above, all you have to do to make a MIDI-compatible synthesizer is to declare the freq, gain and gate parameters. Additionally, SmartKeyboard is compatible with all the standard Faust MIDI metadata (find out more about this in the "MIDI Support" section of the Faust Documentation).

We can improve the code from the last section a little to implement some MIDI-keyboard-specific behaviors. For instance, the bend parameter is not associated with the pitch bend wheel of the keyboard by default. To implement this feature, we just have to add the [midi:pitchwheel] metadata to the bend parameter declaration. Additionally, if we want events coming from the sustain pedal to be taken into account, we need to create a new parameter associated with MIDI CC 64 (which corresponds to the sustain pedal in the MIDI standard) such that:

bend = nentry("bend[midi:pitchwheel]",1,0,10,0.01) : si.polySmooth(gate,0.999,1);
s = nentry("sustain[midi:ctrl 64]",0,0,1,1);
t = button("gate");
gate = t+s : min(1);

Thus, the value of gate is equal to one if t or s is equal to one.

The same approach could be used to map a specific parameter to the modulation wheel by associating that parameter with MIDI CC 1 using the [midi:ctrl 1] metadata, etc.
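For example (vibratoGain is a hypothetical parameter name introduced here purely for illustration), a vibrato depth controlled by MIDI CC 1 could be declared as:

```faust
// hypothetical parameter mapped to MIDI CC 1 (modulation wheel)
vibratoGain = nentry("vibratoGain[midi:ctrl 1]",0,0,1,0.01) : si.smoo;
```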

Adding Audio Effects

While we could simply add an audio effect to the output of our synthesizer by modifying the previous code, it's a pretty bad idea since the effect would then be re-instantiated for each voice. Thankfully, faust2smartkeyb allows you to provide a separate file containing the audio effect portion of your app before compilation. The only requirement is that the number of outputs of your synth be the same as the number of inputs of your effect.

Create a new Faust file and call it myEffect.dsp. Place the following code in it:

import("stdfaust.lib");
process = dm.zita_rev1;

zita_rev1 is defined in the Faust libraries and implements a nice reverb. It has a stereo input and a stereo output, so we should send it a stereo signal (which is the case of the synth we implemented in the previous sections). The default output gain of dm.zita_rev1 is kind of low, so you might have to adjust it (either by looking at its source code in the Faust libs or simply by multiplying its outputs by some kind of coefficient).
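A minimal sketch of the second option (the gain factor of 2 is an arbitrary choice, to be adjusted by ear):

```faust
import("stdfaust.lib");
// boost both channels of the reverb output by an arbitrary factor of 2
process = dm.zita_rev1 : par(i,2,*(2));
```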

To add myEffect.dsp to our app, just add the -effect option to the faust2smartkeyb command followed by the name of the effect file:

faust2smartkeyb -android -effect myEffect.dsp -reuse -source mySynth.dsp

That's it! You should now hear a stereo reverb applied to the sound of your synth.

Once again, the final product of this tutorial can be downloaded here.

Dealing With Polyphony and Monophony: Electric Guitar With Isomorphic Keyboard

In this section, we demonstrate how to configure the polyphonic and monophonic behavior of a SmartKeyboard interface. For that, we'll make an app implementing a simple "electric guitar" physical model controlled by an isomorphic keyboard.

Overview of Polyphony and Monophony Handling with SmartKeyboard

By default, faust2smartkeyb creates a polyphonic interface (one finger = one independent voice). The first thing you should keep in mind is that signals add up (1+1 = 2), meaning that if you don't scale the gain of your synth, it is quite possible that it will clip when many voices play at once. You should not scale the gain as a function of the number of active voices though; that would feel and sound weird! Instead, you want to statically scale the gain, even if you might lose some dynamic range by doing so. After all, that's how an acoustic piano works, right? If your app contains an effect, we recommend doing this scaling at the effect level to save some computation; otherwise you can do it directly in your synth voice implementation. Keep in mind that some older devices (especially on Android) have limited CPU power, so paying attention to performance is quite important when doing mobile development.
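For example (a sketch; the 0.25 factor is arbitrary and should be adjusted to the number of simultaneous voices you expect), the process line of the synth from the previous sections could be statically scaled like this:

```faust
// arbitrary static scaling to leave headroom for several simultaneous voices
process = os.sawtooth(freq)*envelope*0.25 <: _,_;
```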

The polyphonic and monophonic behavior of a SmartKeyboard interface can be precisely configured. First, the internal maximum number of voices of the synthesizer can be set before compilation using the -nvoices option. For example, to create a synth with a maximum of 20 voices, you could run something like:

faust2smartkeyb -android -nvoices 20 -reuse -source mySynth.dsp

In practice, this parameter is just a safeguard since only active voices are computed and allocated. Keep in mind that some mobile devices might have a limited CPU power, so you might have to set a relatively low limit in some cases depending on the complexity of your synth.

In addition to that, the maximum number of voices of a SmartKeyboard interface can be configured using the Max Keyboard Polyphony key. This parameter should be smaller than or equal to the one set with -nvoices, and the two measure different things: Max Keyboard Polyphony corresponds to the maximum number of keys pressed simultaneously on the keyboard, while -nvoices is the actual number of voices allowed by the system. For example, if your synth has a long release (e.g., a string physical model), voices will remain active until their release ends, so more internal voices than pressed keys may be running at once.
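For instance (illustrative numbers; myString.dsp is a hypothetical file name), a string model with a long release could allow 6 simultaneously pressed keys while reserving 20 internal voices for overlapping releases:

```faust
declare interface "SmartKeyboard{
  'Max Keyboard Polyphony':'6'
}";
```

compiled with: faust2smartkeyb -android -nvoices 20 -reuse -source myString.dsp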

To make things even more complicated, a Max Fingers key can be configured to control the maximum number of fingers allowed on the screen. Indeed, we'll see later that it is possible to have keyboards and other types of elements in a SmartKeyboard interface, and this parameter can become useful in that case.

Activating the monophonic mode can be done simply by setting Max Keyboard Polyphony to 1. The behavior of SmartKeyboard can be adjusted in that case by configuring the Mono Mode key.
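As a sketch (the Mono Mode value shown is illustrative; check the faust2smartkeyb documentation for the list of available modes), a monophonic keyboard could be declared as:

```faust
declare interface "SmartKeyboard{
  'Max Keyboard Polyphony':'1',
  'Mono Mode':'1'
}";
```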

More information about this can be found in the faust2smartkeyb documentation.

Electric Guitar Physical Model

The complete code of this tutorial can be downloaded here.

The electric guitar physical model used in this tutorial is declared in the Faust physical modeling library and is called elecGuitar. It is based on a single steel string connected to a stiff bridge and nut:

import("stdfaust.lib");

// standard parameters
f = hslider("freq",300,50,2000,0.01);
bend = hslider("bend[midi:pitchwheel]",1,0,10,0.01) : si.polySmooth(gate,0.999,1);
gain = hslider("gain",1,0,1,0.01);
s = hslider("sustain[midi:ctrl 64]",0,0,1,1); // for sustain pedal
t = button("gate");

// mapping params
gate = t+s : min(1);
freq = f*bend : max(60); // min freq is 60 Hz

stringLength = freq : pm.f2l;
pluckPosition = 0.8;
mute = gate : si.polySmooth(gate,0.999,1);

process = pm.elecGuitar(stringLength,pluckPosition,mute,gain,gate) <: _,_;

The top-level parameters of our instrument connected to the SmartKeyboard interface are just the frequency of the string freq, bend for continuous pitch control, gain for the pluck velocity, sustain in case a MIDI sustain pedal is used, and gate for plucks. Note that the frequency of the string can't be lower than 60 Hz (very low frequencies might mess with the string model).

The mute parameter of the model is controlled using gate. That way, when a finger leaves the touchscreen, the virtual string currently in use is damped.

Finally, an electric guitar wouldn't be one without a distortion! We can create a separate effect Faust file implementing a stereo distortion (using ef.cubicnl):

import("stdfaust.lib");
distDrive = 0.8;
distOffset = 0;
process = par(i,2,ef.cubicnl(distDrive,distOffset)) : dm.zita_rev1;

and connect it to our guitar synth using the -effect option when running faust2smartkeyb. Note that we're also using a reverb here :).

Monophonic Isomorphic Keyboard

Here we want to create an isomorphic layout where each keyboard is monophonic and implements a "string." Keyboards should be a fourth apart from each other (more or less like on a guitar). We want to be able to slide between keyboards (strum) to trigger a new note (voice), and we want new fingers on a keyboard to "steal" the pitch from the previous finger (a sort of hammer-on), which is the default monophonic mode anyway.

declare interface "SmartKeyboard{
  'Number of Keyboards':'6',
  'Max Keyboard Polyphony':'1',
  'Keyboard 0 - Number of Keys':'13',
  'Keyboard 1 - Number of Keys':'13',
  'Keyboard 2 - Number of Keys':'13',
  'Keyboard 3 - Number of Keys':'13',
  'Keyboard 4 - Number of Keys':'13',
  'Keyboard 5 - Number of Keys':'13',
  'Keyboard 0 - Lowest Key':'72',
  'Keyboard 1 - Lowest Key':'67',
  'Keyboard 2 - Lowest Key':'62',
  'Keyboard 3 - Lowest Key':'57',
  'Keyboard 4 - Lowest Key':'52',
  'Keyboard 5 - Lowest Key':'47',
  'Rounding Mode':'2'
}";

This code should be placed before the physical model implementation. The interface of the app should now look like this:

The final product of this tutorial can be downloaded here.

Rock on!

Using Built-In Sensors and Implementing X/Y Controllers: Making Sound Toys

In this section, we demonstrate how to use the built-in sensors of mobile devices to control the parameters of a Faust object. This system is standardized and should work with other Faust architectures (e.g., faust2android, faust2ios, etc.).

We also show how a SmartKeyboard interface can be configured as a multitouch X/Y controller that doesn't involve polyphony and generates sound continuously.

The final product of this tutorial can be downloaded here.

Using Built-In Sensors to Control Parameters

Besides their touch screen, mobile devices usually host a wide range of built-in sensors such as an accelerometer, a gyroscope, etc. When combined with the touch screen interface, they can be used to implement advanced gestures.

Using these sensors to control some of the parameters is incredibly easy in Faust-based mobile apps. All you need to do is to use the acc metadata in the parameter declaration of your Faust code. This metadata has five parameters and has the following syntax:

[acc: a b c d e] // for accelerometer
[gyr: a b c d e] // for gyroscope

and can be used that way in a Faust UI parameter declaration:

parameter = nentry("UIparamName[acc: a b c d e]",def,min,max,step);

with:

a: the sensor axis (0: X, 1: Y, 2: Z);
b: the mapping curve (0: up, 1: down, 2: up-down, 3: down-up);
c: the minimum acceleration (in m/s², typically between -10 and 10), mapped to the minimum value of the parameter;
d: the "center" acceleration, mapped to the default value of the parameter;
e: the maximum acceleration, mapped to the maximum value of the parameter.

This allows complex linear and non-linear mappings to be created, as summarized in this figure:

For example, controlling the gain of a synthesizer using the X axis of the accelerometer can easily be done simply by writing something like:

g = nentry("gain[acc: 0 0 -10 0 10]",0.5,0,1,0.01);

With this configuration, g = 0 when the device is standing vertically on its right side, g = 0.5 when the device is standing horizontally with screen facing up, and g = 1 when the device is standing vertically on its left side.

Finally, in this slightly more complex mapping, g = 0 when the device is tilted on its right side and the value of g increases towards 1 when the device is tilted on its left side:

g = nentry("gain[acc: 0 0 0 0 10]",0,0,1,0.01);

Complex nonlinear mappings can be implemented using this system.
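For instance (assuming that curve value 2 corresponds to the "up-down" mapping, as described in the Faust sensor-mapping documentation), the gain could be made maximal when the device lies flat and decrease when it is tilted to either side:

```faust
// assumed up-down curve (b = 2) on the accelerometer X axis
g = nentry("gain[acc: 0 2 -10 0 10]",1,0,1,0.01);
```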

X/Y Controller With Continuous Sound

Now we'll demonstrate how to make an app implementing a "sound toy" that continuously generates sound and doesn't use the polyphony system of SmartKeyboard at all.

First, let's implement a simple sound generator based on a filtered impulse train:

import("stdfaust.lib");

// parameters
x0 = hslider("x0",0.5,0,1,0.01) : si.smoo;
y0 = hslider("y0",0.5,0,1,0.01) : si.smoo;
y1 = hslider("y1",0,0,1,0.01) : si.smoo;
q = hslider("q[acc: 0 0 -10 0 10]",30,10,50,0.01) : si.smoo;
del = hslider("del[acc: 0 0 -10 0 10]",0.5,0.01,1,0.01) : si.smoo;
fb = hslider("fb[acc: 1 0 -10 0 10]",0.5,0,1,0.01) : si.smoo;

// mapping
impFreq = 2 + x0*20;
resFreq = y0*3000+300;

// simple echo effect
echo = +~(de.delay(65536,del*ma.SR)*fb);

// putting it together
process = os.lf_imptrain(impFreq) : fi.resonlp(resFreq,q,1) : echo : ef.cubicnl(y1,0)*0.95 <: _,_;

os.lf_imptrain is used to generate clicks at low frequency (between 2 and 22 per second, according to impFreq). These clicks are filtered by a resonant lowpass filter (fi.resonlp) whose Q is controlled using the accelerometer and whose resonance frequency is mapped to the Y position of the first finger (y0). To create some density, this all goes to an echo effect whose feedback (fb) and delay duration (del) are also controlled with the accelerometer. Finally, because smartphones tend to be quiet and because we don't really care about having a "dirty" sound here, we send the output of the echo to a distortion. The "overdrive" parameter of the distortion is controlled using the Y position of the second finger on the screen (y1).

For the interface, we just want a blank screen where the positions of the different fingers on the screen can be tracked and retrieved in the Faust object. For that, we create one keyboard with a single key filling the screen. We ask the interface not to compute the freq and bend parameters (to save computation) by setting 'Keyboard 0 - Send Freq':'0'. We don't want the color of the key to change when it is touched, so we activate Static Mode. Fingers need to be numbered to be able to use the numbered x and y parameters (x0, y0, x1, etc.), so Send Numbered X and Send Numbered Y are enabled. Finally, by setting Max Keyboard Polyphony to 0, we deactivate the voice allocation system and automatically start a voice when the app is launched. This means that fingers are no longer associated with specific voices.

declare interface "SmartKeyboard{
  'Number of Keyboards':'1',
  'Max Keyboard Polyphony':'0',
  'Keyboard 0 - Number of Keys':'1',
  'Keyboard 0 - Send Freq':'0',
  'Keyboard 0 - Static Mode':'1',
  'Keyboard 0 - Piano Keyboard':'0',
  'Keyboard 0 - Send Numbered X':'1',
  'Keyboard 0 - Send Numbered Y':'1'
}";

As for the previous examples, this code should be placed right before the synthesizer implementation. Now try to build an orchestra!

The final product of this tutorial can be downloaded here.

Using Keys as Independent Pads: Making Drum Pads

In this section, we present a possible strategy (we'll see that this can be done in a different way in the next section) to control independent instruments using the SmartKeyboard interface. For instance, we'll control three different drum modal physical models using three "pads" implemented as keyboard keys on the screen.

The final product of this tutorial can be downloaded here.

declare interface "SmartKeyboard{
    'Number of Keyboards':'2',
    'Keyboard 0 - Number of Keys':'2',
    'Keyboard 1 - Number of Keys':'1',
    'Keyboard 0 - Static Mode':'1',
    'Keyboard 1 - Static Mode':'1',
    'Keyboard 0 - Send X':'1',
    'Keyboard 0 - Send Y':'1',
    'Keyboard 1 - Send X':'1',
    'Keyboard 1 - Send Y':'1',
    'Keyboard 0 - Piano Keyboard':'0',
    'Keyboard 1 - Piano Keyboard':'0',
    'Keyboard 0 - Key 0 - Label':'High',
    'Keyboard 0 - Key 1 - Label':'Mid',
    'Keyboard 1 - Key 0 - Label':'Low'
}";

import("stdfaust.lib");

// standard parameters
gate = button("gate");
x = hslider("x",1,0,1,0.001);
y = hslider("y",1,0,1,0.001);
keyboard = hslider("keyboard",0,0,1,1) : int;
key = hslider("key",0,0,1,1) : int;

drumModel = pm.djembe(rootFreq,exPos,strikeSharpness,gain,gate)
with{
    // frequency of the lowest drum
    bFreq = 60;
    // retrieving pad ID (0-2)
    padID = 2-(keyboard*2+key);
    // drum root freq is computed in function of pad number
    rootFreq = bFreq*(padID+1);
    // excitation position
    exPos = min((x*2-1 : abs),(y*2-1 : abs));
    strikeSharpness = 0.5;
    gain = 2;
};

process = drumModel <: _,_;

djembe() is declared in physmodels.lib and implements a simple drum/djembe physical model using modal synthesis. The root frequency (rootFreq) can be configured for this instrument, allowing us to easily change the "size" of the virtual drum.

The interface is made out of 3 pads (one keyboard with 2 keys and one keyboard with 1 key). Each pad is mapped to a specific root frequency by retrieving the key ID (padID) in the interface using the keyboard and key standard parameters.

The instrument is made polyphonic so that every time a finger touches one of the pads, a new voice is allocated until t60 is reached. The X/Y position of the finger is retrieved and associated to the strike position parameter of the model so that more low frequencies are generated when the pad is hit at the center and vice versa.

To make things sound better, we can add our "usual" reverb to our synth using the -effect option of faust2smartkeyb.

Using Different Keyboards to Control Different Synths

This section presents a mapping strategy different from that of the previous section to assign different synths to different keyboards. Thus, the goal of this short tutorial is to make an app with four parallel keyboards, each controlling a different synthesizer.

The final product of this tutorial can be downloaded here.

Synth Implementation

The different synths that we use here are all based on a filtered waveform. We'll use the keyboard standard parameter to create a condition activating a specific waveform as a function of the keyboard being touched.

import("stdfaust.lib");

// standard parameters
f = hslider("freq",300,50,2000,0.01);
bend = hslider("bend[midi:pitchwheel]",1,0,10,0.01) : si.smoothAndH(gate,0.999);
gain = hslider("gain",1,0,1,0.01);
s = hslider("sustain[midi:ctrl 64]",0,0,1,1); // for sustain pedal
t = button("gate");
y = hslider("y[midi:ctrl 1]",1,0,1,0.001) : si.smoo;
keyboard = hslider("keyboard",0,0,3,1) : int;

// formatting parameters
gate = t+s : min(1);
freq = f*bend;
cutoff = y*4000+50;

// oscillators
oscillators(0) = os.sawtooth(freq);
oscillators(1) = os.triangle(freq);
oscillators(2) = os.square(freq);
oscillators(3) = os.osc(freq);

// oscillators are selected as a function of the current keyboard
synths = par(i,4,select2(keyboard == i,0,oscillators(i))) :> fi.lowpass(3,cutoff) : *(envelope)
with{
  envelope = gate*gain : si.smoo;
};

process = synths <: _,_;

A series of four oscillators are put in parallel and activated as a function of the value of keyboard. They all go through the same lowpass filter (fi.lowpass), whose cutoff frequency is controlled by the Y position of the finger on the current keyboard. Note the use of MIDI CC 1 (which corresponds to the modulation wheel on a MIDI keyboard) on this parameter.

The main drawback of this technique is that the deactivated oscillators partially remain active, which is quite inefficient from a computational standpoint. This problem can be solved using the experimental "mute" feature of Faust implemented in the master-mute branch, but we won't detail its use here.

SmartKeyboard Configuration

The SmartKeyboard configuration is relatively simple for this example and only consists of four polyphonic keyboards in parallel:

declare interface "SmartKeyboard{
  'Number of Keyboards':'4',
  'Rounding Mode':'2',
  'Inter-Keyboard Slide':'0',
  'Keyboard 0 - Number of Keys':'13',
  'Keyboard 1 - Number of Keys':'13',
  'Keyboard 2 - Number of Keys':'13',
  'Keyboard 3 - Number of Keys':'13',
  'Keyboard 0 - Lowest Key':'60',
  'Keyboard 1 - Lowest Key':'60',
  'Keyboard 2 - Lowest Key':'60',
  'Keyboard 3 - Lowest Key':'60',
  'Keyboard 0 - Send Y':'1',
  'Keyboard 1 - Send Y':'1',
  'Keyboard 2 - Send Y':'1',
  'Keyboard 3 - Send Y':'1'
}";

The final product of this tutorial can be downloaded here.

MIDI Synth App With Touch Screen as Continuous Controller

In this section, we demonstrate how to make a MIDI-controllable app where the mobile device's touch screen is used to control specific parameters of the synth continuously through two separate X/Y control surfaces. Thus, this instrument needs an external MIDI keyboard in order to work.

The final product of this tutorial can be downloaded here.

The SmartKeyboard configuration for this instrument consists of a single keyboard with two keys. Each key implements a control surface:

declare interface "SmartKeyboard{
  'Number of Keyboards':'1',
  'Keyboard 0 - Number of Keys':'2',
  'Keyboard 0 - Send Freq':'0',
  'Keyboard 0 - Piano Keyboard':'0',
  'Keyboard 0 - Static Mode':'1',
  'Keyboard 0 - Send Key X':'1',
  'Keyboard 0 - Key 0 - Label':'Mod Index',
  'Keyboard 0 - Key 1 - Label':'Mod Freq'
}";

The synth implementation is pretty standard in that case and should be MIDI-compatible by declaring the usual standard parameters (freq, gain, gate, etc.).

import("stdfaust.lib");

f = hslider("freq",300,50,2000,0.01);
bend = hslider("bend[midi:pitchwheel]",1,0,10,0.01) : si.polySmooth(gate,0.999,1);
gain = hslider("gain",1,0,1,0.01);
key = hslider("key",0,0,1,1) : int;
kb0k0x = hslider("kb0k0x[midi:ctrl 1]",0.5,0,1,0.01) : si.smoo;
kb0k1x = hslider("kb0k1x[midi:ctrl 1]",0.5,0,1,0.01) : si.smoo;
s = hslider("sustain[midi:ctrl 64]",0,0,1,1);
t = button("gate");

// formatting parameters
gate = t+s : min(1);
freq = f*bend;
index = kb0k0x*1000;
modFreqRatio = kb0k1x;

envelope = gain*gate : si.smoo;

process = sy.fm((freq,freq + freq*modFreqRatio),index*envelope)*envelope <: _,_;

We use a "standard" FM synthesizer from Faust's synths.lib. Its modulation index (index) is controlled by the X position of any finger touching the left pad (the first key on the keyboard). Similarly, the modulation frequency is controlled by the X position on the right pad. Finally, the envelope is also used to scale the modulation index to create nice, natural attacks.

Non-Polyphonic Bowed Instrument: Physical Model Approach

The final product of this tutorial can be downloaded here.

In this section, we present the most complex SmartKeyboard mapping and configuration of this tutorial series. The goal is to control a non-polyphonic synthesizer (e.g., a physical model) using a combination of different types of UI elements to design an app with the following interface:

Each keyboard represents an independent string, and the surface at the bottom can be used as a "bow" by executing continuous movements on it. We want the four "strings" to be constantly running, just as if it were a physical instrument.

To implement the interface described above, we need to declare 5 keyboards (4 actual keyboards and 1 control surface). We want to disable the voice allocation system and activate a voice on start-up so that all strings are constantly running, so we set Max Keyboard Polyphony to 0. Since we don't want the first 4 keyboards to send the X and Y positions of fingers on the screen, we set Send X and Send Y to 0 for all of them. Similarly, we don't want the fifth keyboard to send pitch information to the synth, so we set Send Freq to 0 for that keyboard. Finally, we deactivate the piano keyboard mode for the fifth keyboard to make sure that its color doesn't change when the key is touched and that note names are not displayed.

declare interface "SmartKeyboard{
  'Number of Keyboards':'5',
  'Max Keyboard Polyphony':'0',
  'Rounding Mode':'1',
  'Keyboard 0 - Number of Keys':'19',
  'Keyboard 1 - Number of Keys':'19',
  'Keyboard 2 - Number of Keys':'19',
  'Keyboard 3 - Number of Keys':'19',
  'Keyboard 4 - Number of Keys':'1',
  'Keyboard 4 - Send Freq':'0',
  'Keyboard 0 - Send X':'0',
  'Keyboard 1 - Send X':'0',
  'Keyboard 2 - Send X':'0',
  'Keyboard 3 - Send X':'0',
  'Keyboard 0 - Send Y':'0',
  'Keyboard 1 - Send Y':'0',
  'Keyboard 2 - Send Y':'0',
  'Keyboard 3 - Send Y':'0',
  'Keyboard 0 - Lowest Key':'55',
  'Keyboard 1 - Lowest Key':'62',
  'Keyboard 2 - Lowest Key':'69',
  'Keyboard 3 - Lowest Key':'76',
  'Keyboard 4 - Piano Keyboard':'0',
  'Keyboard 4 - Key 0 - Label':'Bow'
}";

As for the synthesizer, because I'm tired of writing tutorials, I decided to replace the bowed string physical model with an FM synth similar to the one presented in the previous section :). Four parallel synths implement the different "strings" (synthSet). The carrier frequency of each of them is updated when its associated keyboard is touched; otherwise it is held. Movements on the X axis of the control surface (velocity) are detected simply by subtracting the one-sample-delayed version of x from its current value. Finally, the Y axis of the control surface is used to detune the modulation frequency of the synths.

import("stdfaust.lib");

// parameters
f = hslider("freq",400,50,2000,0.01);
bend = hslider("bend",1,0,10,0.01);
keyboard = hslider("keyboard",0,0,5,1) : int;
key = hslider("key",0,0,18,1) : int;
x = hslider("x",0.5,0,1,0.01) : si.smoo;
y = hslider("y",0,0,1,0.01) : si.smoo;

// mapping
freq = f*bend;
// dirty motion tracker
velocity = x-x' : abs : an.amp_follower_ar(0.1,1) : *(8000) : min(1);

// 4 "strings"
synthSet = par(i,4,synth(localFreq(i),velocity)) :> _
with{
  localFreq(i) = freq : ba.sAndH(keyboard == i) : si.smoo;
  synth(freq,velocity) = sy.fm((freq,freq + freq*modFreqRatio),index*velocity)*velocity
  with{
    index = 1000;
    modFreqRatio = y*0.3;
  };
};

process = synthSet <: _,_;

Additional SmartKeyboard Customization

So now that you're an expert at turning the SmartKeyboard interface into anything you want, you might be tired of using the same boring black and white "keys" to design your interface. Here, we quickly show you how to replace them with something else.

In practice, there's only one type of UI element in SmartKeyboard: keys. A key can have 2 states (on/off) and can be black or white when the default piano keyboard mode is used. In other words, there is a total of 4 possible representations of a key. Each of them is simply implemented as a picture, so all you have to do is replace the pictures corresponding to the key states in the app's source code.

On iOS, this is very straightforward: all the pictures corresponding to key states and types can be found in Faust/img in the source code of the app generated by faust2smartkeyb when using the -source option. The file names should be self-explanatory. Make sure that your replacement pictures have the same format as the originals.

On Android, things are a bit more complicated because each picture comes in a different version for a wide range of screen sizes. Go to /app/src/main/res/ in the source code of the app generated by faust2smartkeyb when using the -source option. You should find a series of folders whose names start with drawable-[...]. Each of them contains a set of pictures for the different key types and states. Check the size of each picture in each folder and make sure that your replacement pictures have exactly the same size.
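As a quick way to check those sizes, here is a hypothetical shell sketch (not part of the tutorial) that walks the drawable-* folders and prints the dimensions of each key picture. It assumes you are in the generated app's source tree and that ImageMagick's identify tool is installed; the exact picture file names depend on your generated project.

```shell
# Hypothetical sketch: list the dimensions of every key picture so you
# can match them when making replacements. Assumes ImageMagick (identify)
# and the faust2smartkeyb -source Android layout described above.
cd app/src/main/res 2>/dev/null || echo "run this from the generated app's source tree"
for d in drawable-*; do
  [ -d "$d" ] || continue           # skip if no drawable folders are present
  echo "== $d =="
  identify "$d"/*.png               # prints each picture's name and WIDTHxHEIGHT
done
```

Any replacement picture whose reported WIDTHxHEIGHT differs from the original in the same folder will need to be resized before you drop it in.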

That's it!