The Neuton platform enables users to solve machine learning tasks in various domains by processing sensor data (gyroscopes, accelerometers, magnetometers, electromyography sensors, etc.).
To train a model, a dataset in CSV format is required.
A typical dataset contains independent features, usually referred to as "predictors", and a dependent variable that the model learns to predict, referred to as the "target" (label) or target variable. Each row of the CSV file holds one set of feature values, and the first row contains the column names.
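For example, a small gesture-recognition dataset collected from a 3-axis accelerometer might look like the following; the column names and values here are purely illustrative:

```csv
accel_x,accel_y,accel_z,target
0.12,-0.98,0.05,idle
0.85,-0.10,0.33,swipe
0.07,-1.02,0.01,idle
0.91,-0.05,0.40,swipe
```

In this example, accel_x, accel_y, and accel_z are the predictors, and target is the label column.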
The uploaded data can be either fully prepared or require preprocessing.
For signal processing, the platform provides a wide range of functions, such as Windowing, Feature Extraction, and Feature Selection.
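As a rough sketch of what windowing and feature extraction involve conceptually (a generic illustration, not the platform's actual implementation), a fixed-size window is slid over the raw signal and a few statistics, such as the mean and RMS, are computed for each window:

```c
#include <math.h>
#include <stddef.h>

/* Illustrative only: compute mean and RMS features over fixed-size,
 * non-overlapping windows of a raw 1-D sensor signal. */
#define WINDOW_SIZE 64

typedef struct {
    float mean;
    float rms;
} window_features_t;

/* Extract features for each full window; returns the number of windows produced. */
size_t extract_features(const float *signal, size_t length,
                        window_features_t *out, size_t max_windows)
{
    size_t n_windows = 0;
    for (size_t start = 0;
         start + WINDOW_SIZE <= length && n_windows < max_windows;
         start += WINDOW_SIZE) {
        float sum = 0.0f, sum_sq = 0.0f;
        for (size_t i = 0; i < WINDOW_SIZE; ++i) {
            float s = signal[start + i];
            sum    += s;
            sum_sq += s * s;
        }
        out[n_windows].mean = sum / WINDOW_SIZE;
        out[n_windows].rms  = sqrtf(sum_sq / WINDOW_SIZE);
        ++n_windows;
    }
    return n_windows;
}
```

Feature Selection then keeps only those extracted features that contribute most to predicting the target.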
To build the most compact models with high inference speed and optimal accuracy, users can control the model's calculation settings. Model accuracy is calculated automatically on a held-out portion of the training dataset. Additionally, users can upload an independent dataset so that validation and metric calculation are performed on it.
During training, the model looks for patterns and dependencies between the predictors and the target variable.
After the model has been trained, the user gets a ready-to-use C Library with a total footprint score. The user is free to embed the C Library on the device or evaluate the model's quality on test data using the Inference Runner, which is located in the Artifacts folder of the same downloadable archive.
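A typical embedding flow looks roughly like the sketch below. The function names model_init and model_run_inference are placeholders, not the generated library's actual API; consult the C Library in the downloaded archive for the real identifiers.

```c
#include <stdio.h>

/* Hypothetical prototypes standing in for whatever the generated
 * C Library actually exposes. */
extern void model_init(void);
extern int  model_run_inference(const float *inputs, float *outputs);

int main(void)
{
    /* One sample of feature values, in the same order as in the training CSV. */
    float inputs[3]  = { 0.12f, -0.98f, 0.05f };
    float outputs[2] = { 0.0f, 0.0f };   /* e.g. one score per class */

    model_init();                                        /* one-time setup    */
    int predicted = model_run_inference(inputs, outputs); /* run one inference */

    printf("Predicted class: %d\n", predicted);
    return 0;
}
```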
The full machine learning workflow within Neuton consists of three simple steps:
Select data for training
Train your model
Run inference on a device