During training, the Neuton Platform provides a real-time dashboard where you can monitor
the model's quality, the number of coefficients, and the model size.
Once optimal model quality and size are achieved, training stops automatically.
Alternatively, you may stop training manually once the model quality is stable and your
target model requirements are met.
2. Running Inference
After successful training, an archive containing the C/C++ library with the model for
deployment is generated.
The library contains the following files:
neuton.h - the library header file
neuton.c - the library source code
model/model.h - the model header file
StatFunctions.h / StatFunctions.c - statistical functions used for preprocessing
All files are an integral part of the library and must be used without modification.
The library is written to the C99 standard, so it is highly portable and imposes no strict
hardware requirements. Whether it can be used on a given device depends mainly on the
amount of memory available for its operation.
The deployment consists of the following steps:
1. Copying all files from the archive into the project and including the library header
file.
2. Creating a float array with the model inputs and passing it to the
`neuton_model_set_inputs` function.
3. Calling `neuton_model_run_inference` and processing the results.