RTNeural
1.0.0
Real-time neural inferencing library
A lightweight neural network inferencing engine written in C++. This library is designed for use in real-time systems, in particular real-time audio processing.
Currently supported layers:
Currently supported activations:
Additional resources:
If you are using RTNeural as part of an academic work, please cite the library as follows:
RTNeural is capable of taking a neural network that has already been trained, loading the weights from that network, and running inference. Some simple examples are available in the examples/ directory.
Neural networks are typically trained using Python libraries such as TensorFlow or PyTorch. Once you have trained a neural network using one of these frameworks, you can "export" the network weights to a json file, so that RTNeural can read them. An implementation of the export process for a "sequential" TensorFlow model is provided in python/model_utils.py, and can be used as follows.
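As a sketch of that export step, assuming model is an already-trained tf.keras sequential model and that RTNeural's python/ directory is on the Python path (the save_model name comes from model_utils.py; treat the exact signature and the model.json filename as assumptions):

```python
# Assumes `model` is a trained tf.keras Sequential model, and that
# RTNeural's python/ directory is on the Python path.
from model_utils import save_model

# Write the network weights to a json file that RTNeural can read.
save_model(model, 'model.json')
```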
For an example of exporting a model from PyTorch, see this example script.
Next, you can create an inferencing engine in C++ directly from the exported json file:
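A minimal sketch using RTNeural's json_parser (the template parameter selects the floating-point type; the model.json filename is an assumption):

```cpp
#include <fstream>
#include <RTNeural/RTNeural.h>

// Open the exported weights file and build the model dynamically at run-time.
std::ifstream jsonStream { "model.json", std::ifstream::binary };
auto model = RTNeural::json_parser::parseJson<float>(jsonStream);
```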
Before running inference, it is recommended to "reset" the state of your model (if the model has state).
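For a model loaded as above, resetting looks like:

```cpp
model->reset(); // clear any internal state (e.g. recurrent layer states)
```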
Then, you may run inference as follows:
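A hedged sketch, assuming the model was loaded dynamically as above (the input size shown here is an assumption and must match the exported model):

```cpp
float input[] = { 1.0f, 0.5f, -0.1f };  // input vector, sized to the model's input size
float output = model->forward(input);   // run inference on one frame of input
```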
The code shown above will create the inferencing engine dynamically at run-time. If the model architecture is fixed at compile-time, it may be preferable to use RTNeural's API for defining an inferencing engine type at compile-time, which can significantly improve performance.
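As a sketch of the compile-time API, a small dense network might be declared as follows (the layer sizes are assumptions and should match the exported model; loading via a parseJson member is also an assumption):

```cpp
#include <fstream>
#include <RTNeural/RTNeural.h>

// Fixed architecture, known at compile-time: 1 input, dense(8), tanh, dense(1)
RTNeural::ModelT<float, 1, 1,
    RTNeural::DenseT<float, 1, 8>,
    RTNeural::TanhActivationT<float, 8>,
    RTNeural::DenseT<float, 8, 1>> modelT;

std::ifstream jsonStream { "model.json", std::ifstream::binary };
modelT.parseJson(jsonStream); // load the exported weights into the fixed model
```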
The above example code assumes that the trained model has been exported from TensorFlow. For loading PyTorch models, the RTNeural::torch_helpers namespace provides helper functions for loading layers exported from PyTorch.
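As a hedged sketch, assuming a compile-time model whose first layer is a DenseT, and a PyTorch export in which that layer's weights are stored under a "dense." prefix (the prefix and layer index are assumptions):

```cpp
#include <fstream>
#include <RTNeural/RTNeural.h>

// Read the exported PyTorch weights into a json object.
std::ifstream jsonStream { "model.json", std::ifstream::binary };
nlohmann::json modelJson;
jsonStream >> modelJson;

// Load the weights for the model's first (dense) layer.
RTNeural::torch_helpers::loadDense<float>(modelJson, "dense.", model.get<0>());
```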
For more examples, see the examples/torch directory.
RTNeural is built with CMake, and the easiest way to link is to include RTNeural as a submodule:
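A minimal sketch, assuming the submodule has been added at modules/RTNeural and your CMake target is named MyTarget (both names are assumptions):

```cmake
# CMakeLists.txt
add_subdirectory(modules/RTNeural)
target_link_libraries(MyTarget LINK_PUBLIC RTNeural)
```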
If you are trying to use RTNeural in a project that does not use CMake, please see the instructions below.
RTNeural supports three backends: Eigen, xsimd, and the C++ STL. You can choose your backend by passing -DRTNEURAL_EIGEN=ON, -DRTNEURAL_XSIMD=ON, or -DRTNEURAL_STL=ON to your CMake configuration. By default, the Eigen backend will be used. Alternatively, you may select your backend in your CMake configuration as follows:
The Eigen backend typically has the best performance for larger networks, while smaller networks may perform better with the xsimd backend. However, it is recommended to measure the performance of your network with all the backends that are available on your target platform to ensure optimal performance. For more information, see the benchmark results.
Note that you must abide by the licensing rules of whichever backend library you choose.
If you would like to build RTNeural with the AVX SIMD extensions, you may run CMake with the -DRTNEURAL_USE_AVX=ON flag. Note that this flag will have no effect when compiling for platforms that do not support AVX instructions.
To build RTNeural's test suite, run cmake -Bbuild -DBUILD_TESTS=ON, followed by cmake --build build. To run the full testing suite, run ctest from the build folder. For more information, see tests/README.md.
To build the performance benchmarks, run cmake -Bbuild -DBUILD_BENCH=ON, followed by cmake --build build --config Release. To run the layer benchmarks, run ./build/rtneural_layer_bench <layer> <length> <in_size> <out_size>. To run the model benchmark, run ./build/rtneural_model_bench.
To build the RTNeural examples run:
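A sketch of the likely build commands (the BUILD_EXAMPLES flag name follows the pattern of the BUILD_TESTS and BUILD_BENCH flags above; treat it as an assumption):

```shell
cmake -Bbuild -DBUILD_EXAMPLES=ON
cmake --build build --config Release
```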
The example programs will then be located in build/examples_out/, and may be run from there.
An example of using RTNeural within a real-time audio plugin can be found on GitHub.
If you wish to use RTNeural in a project that doesn't use CMake, RTNeural can be included as a header-only library, with a few extra steps:
- Define RTNEURAL_DEFAULT_ALIGNMENT=16, or RTNEURAL_DEFAULT_ALIGNMENT=32 if you are compiling with AVX support.
- Select a backend by defining RTNEURAL_USE_EIGEN=1 or RTNEURAL_USE_XSIMD=1 (if neither is defined, the STL backend will be used).
- Add the chosen backend's headers to your include paths: <repo>/modules/Eigen for Eigen, or <repo>/modules/xsimd/include/xsimd for xsimd.

It may also be worth checking out the example Makefile.
Contributions to this project are most welcome! Currently, there is a need for the following improvements:
General code maintenance and documentation is always appreciated as well! Note that if you are implementing a new layer type, it is not required to provide support for all the backends, though it is recommended to at least provide a "fallback" implementation using the STL backend.
Thanks to the following individuals for their important contributions:
RTNeural is currently being used by several audio plugins and other projects:
If you are using RTNeural in one of your projects, let us know and we will add it to this list!
RTNeural is open source, and is licensed under the BSD 3-clause license.
Enjoy!