Neuton
Tiny ML

Make Edge Devices Intelligent
Exceptionally Tiny Models without Loss of Accuracy
Explore a novel approach to bringing extra capabilities to edge devices
Less than 1 KB is the average model size achieved by our neural framework
Even tiny 8-bit microcontrollers can be AI-driven and run robust neural networks
Up to 1,000 times smaller models compared to other existing algorithms
Only 1 iteration is enough to build an accurate, fast and incredibly tiny model
No-Code Automated Tiny ML Platform
Neuton is a no-code Tiny AutoML platform powered by a patented Neural Network Framework under the hood. With a highly automated and transparent pipeline, Neuton requires very little input from the user: it automatically builds extremely compact and accurate models, without additional compression, and natively embeds them into 8-, 16-, and 32-bit MCUs.
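For context, here is a minimal sketch of what MCU-side integration of such a generated model typically looks like. The header-free prototypes and names below (model_set_inputs, model_run_inference, the window dimensions) are illustrative placeholders, not the actual Neuton SDK API.

```c
/* Illustrative sketch only: the prototypes and buffer sizes below are
 * placeholders, not the Neuton SDK API. The flow, however, is typical for
 * an MCU-side TinyML integration: fill a feature window, run inference,
 * act on the predicted class. */
#include <stdint.h>
#include <stdio.h>

#define WINDOW_LEN 100          /* samples per inference window (assumption) */
#define NUM_AXES   3            /* accelerometer axes (assumption) */

/* Hypothetical prototypes standing in for the generated model library: */
int          model_set_inputs(const float *features);
const float *model_run_inference(uint16_t *predicted_class);

static float window[WINDOW_LEN * NUM_AXES];

void on_sensor_window_ready(void)
{
    /* Feed the collected sensor window to the model... */
    if (model_set_inputs(window) != 0)
        return;                 /* window not ready or invalid input */

    /* ...and run one inference pass; the model is a plain C array baked
     * into flash, so no file system or dynamic allocation is needed. */
    uint16_t predicted_class = 0;
    const float *probabilities = model_run_inference(&predicted_class);

    if (probabilities != NULL)
        printf("class %u, p = %.2f\n", (unsigned)predicted_class,
               (double)probabilities[predicted_class]);
}
```

In a real project the callback would be driven by the sensor's data-ready interrupt or a DSP feature extractor rather than called directly.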
Community Projects
Hand Activity Recognition
Ultra-Tiny Solution for Daily Activity Recognition
  • Incredibly small total footprint solution 
  • Recognizing similar complex hand movements 
  • Reproducible on your device
Predictive Maintenance
Predictive Maintenance of Compressor Water Pumps

Ever thought about how water enters the central heating pipes? The secret is the proper operation of water pumps. Learn how to use RSL10 sensors and Neuton to run models for timely pump maintenance.

Smart Human Interfaces
Hand-Drawn Gesture Recognition Interface, Like on a Smartphone

How do you quickly develop TinyML models that recognize custom, user-defined gestures drawn on touch interfaces? Check out this project, inspired by the gesture recognition feature found on an ordinary smartphone.

Condition Monitoring
Detecting Unstable Electrical Grid with TinyML

Electricity permeates the entire infrastructure of modern cities, so it's really important to monitor and prevent overloads. Explore how to predict electrical grid stability with Neuton and Particle IoT.

Explore All Projects
Build Extremely Tiny Models. Run Them on the Smallest IoT Devices.
Neuton – the First Neural Network Framework Designed to Create Models with Minimal Size and without Loss of Accuracy
Neuton is based on a unique patented machine learning algorithm that forgoes error backpropagation and stochastic gradient descent, growing the network structure neuron by neuron. This makes it possible to build neural networks without compromising between accuracy and size:

with excellent generalization capability
with minimal size, often less than 1 KB
without loss of accuracy
without compression techniques
5 Unique Features of the Neuton Neural Network Framework
Selective approach to connected features
A selective approach to connected features establishes only the necessary neuron connections while the model is being created, preventing the structure from growing at random.
Unique patented global optimization algorithm
Neuton's algorithm does not rely on error backpropagation or stochastic gradient descent. This makes it possible to create compact models while minimizing the likelihood of getting stuck in local minima.
Automatic neuron-by-neuron structure growth
The principle of neuron-by-neuron structure growth lets Neuton learn from the most general features to the most specific ones and select the optimal model size and accuracy required to solve the task.
No manual search for neural network parameters
Neuton eliminates the time-consuming manual search for neural network parameters (the number of layers and neurons per layer, activation function type, batch size, learning rate, etc.) and quickly and efficiently finds the optimal structure.
Constant cross-validation
The step-by-step growth of the neural network makes it possible to cross-validate the model as each neuron is added and to increase its generalization capability.
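For illustration only, the growth-and-stop idea described above can be pictured as a simple loop: add one neuron at a time and keep it only while cross-validated error still improves. The sketch below uses a toy placeholder for training and evaluation; it is not Neuton's patented algorithm, just the stopping criterion it describes.

```c
/* Conceptual sketch of neuron-by-neuron growth with constant cross-validation.
 * fit_and_cross_validate() is a toy placeholder; this is NOT the patented
 * Neuton algorithm, only an illustration of the growth/stop logic. */
#include <stdio.h>

/* Placeholder: "train" a network with the given number of neurons and return
 * its cross-validated error (lower is better). Here the error simply falls
 * quickly and then flattens out. */
static double fit_and_cross_validate(int neurons)
{
    return 1.0 / (double)neurons + 0.02;
}

int main(void)
{
    const double min_gain = 1e-3;   /* minimum improvement worth one more neuron */
    int    neurons    = 1;
    double best_error = fit_and_cross_validate(neurons);

    for (;;) {
        /* Try to grow the structure by exactly one neuron... */
        double candidate = fit_and_cross_validate(neurons + 1);

        /* ...and keep it only if cross-validation still improves enough. */
        if (best_error - candidate < min_gain)
            break;                   /* growth stops: the model stays as small as possible */

        neurons++;
        best_error = candidate;
    }

    printf("final model: %d neurons, cv error = %.4f\n", neurons, best_error);
    return 0;
}
```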
AI at the Sensor Edge: Hand Activity Recognition on the ISPU
This model recognizes very similar activities with an unprecedentedly small footprint, which allowed it to be embedded into a low-power, resource-constrained intelligent sensor.
                   Program RAM (kB)   Data RAM (kB)
Total solution     3.1                1.2
Model              0.63               0.00
DSP                0.87               0.18
Inference engine   1.6                1.01
Network size: 32 neurons, 188 weights
Estimated on a Cortex-M4 at 48 MHz, compiled with -Os optimization.
Accuracy > 98%, inference time less than 1 ms.
Evaluate the Uniqueness of Neuton's Approach by Comparing Benchmarks
The Well-Known Magic Wand Case Runs 33 Times Faster with Neuton
Metric                                                          TFLM     Neuton   Neuton advantage
Flash (kB): TinyML framework (model + inference engine + DSP)   79.96    5.42     14 times smaller
Flash (kB): Device drivers and business logic                   93.47    93.47    same
SRAM (kB): TinyML framework (model + inference engine + DSP)    18.2     1.72     10 times smaller
SRAM (kB): Device drivers and business logic                    45.69    45.69    same
Inference time (µs)                                             55,262   1,640    33 times faster
Holdout validation accuracy                                     0.93     0.94     0.7% higher accuracy
We compare two models:
The trained TFLM Magic Wand model built into the Arduino SDK
The Neuton model trained on the TensorFlow dataset from the Google repository
Both models were validated on the same holdout dataset (from the repository) and tested on the same MCU (Arduino Nano 33 BLE Sense).
Neuton vs. Non-neural network algorithms
This comparison illustrates how neural networks can achieve compact sizes while maintaining fast inference, which opens up wide opportunities for more complex tasks where non-neural algorithms perform worse.
Holdout accuracy
Case                             Neuton   PME*      More accurate
Gearbox Fault Diagnosis          0.84     0.74      12%
Air-Writing Digits Recognition   0.93     0.82      12%
Arrhythmia Diagnostic (binary)   0.93     0.72      23%
Arrhythmia Diagnostic (multi)    0.96     0.77      20%

Inference time (µs)
Case                             Neuton   PME*      Times faster
Gearbox Fault Diagnosis          56,776   84,080    1.5
Air-Writing Digits Recognition   18,172   52,000    2.9
Arrhythmia Diagnostic (binary)   5,212    106,220   20
Arrhythmia Diagnostic (multi)    14,232   N/A       N/A

SRAM (kB)
Case                             Neuton   PME*      Times smaller
Gearbox Fault Diagnosis          3.54     4.95      1.3
Air-Writing Digits Recognition   2.57     4.92      1.9
Arrhythmia Diagnostic (binary)   0.98     1.9       1.9
Arrhythmia Diagnostic (multi)    2.58     9.31      3.6

FLASH (kB)
Case                             Neuton   PME*      Times smaller
Gearbox Fault Diagnosis          9.64     11.15     1.1
Air-Writing Digits Recognition   8.91     9.79      1.1
Arrhythmia Diagnostic (binary)   2.59     8.61      3.3
Arrhythmia Diagnostic (multi)    6.24     16.95     2.7
Measured on an 8-bit MCU with the AVR GNU Toolchain (7.3.0-atmel3.6.1-arduino7) at a 16 MHz CPU frequency.
* PME: Pattern Matching Engine, a distance-based classifier automatically built by the SensiML service.
Gearbox Fault Diagnosis
Check out how to apply TinyML to detect broken-tooth conditions in a gearbox. The dataset includes vibration data recorded with 4 vibration sensors placed in 4 different directions, under varying load conditions.
Air-Writing Digits Recognition
The goal is to correctly identify the digit a user has written in the air. The dataset contains 200 samples for each handwritten digit (0 to 9). Each sample was recorded for 2 seconds at a frequency of 100 Hz.
Arrhythmia Diagnostic
Binary Classification and Multiclass
Predict the possibility of cardiac arrhythmias and identify the type of arrhythmia based on the samples of ambulatory ECG recordings.
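As a small illustration of what the air-writing input above looks like on-device: each gesture is a 2-second window sampled at 100 Hz, i.e. 200 time steps. The 3-axis layout and the raw flattened buffer in the sketch below are assumptions for illustration; in practice, DSP features are typically extracted from such a window before inference.

```c
/* Buffer-sizing sketch for the air-writing case: 2 s at 100 Hz = 200 time
 * steps per gesture (per the dataset description). The 3-axis accelerometer
 * layout is an assumption for illustration only. */
#include <stdio.h>

#define SAMPLE_RATE_HZ 100
#define WINDOW_SECONDS 2
#define TIME_STEPS     (SAMPLE_RATE_HZ * WINDOW_SECONDS)   /* 200 */
#define NUM_AXES       3                                   /* assumption */

/* One gesture window, flattened in time-major order: x0,y0,z0, x1,y1,z1, ... */
static float gesture_window[TIME_STEPS * NUM_AXES];

int main(void)
{
    printf("time steps per gesture : %d\n", TIME_STEPS);
    printf("floats per input window: %d\n", TIME_STEPS * NUM_AXES);
    printf("window size            : %zu bytes\n", sizeof gesture_window);
    return 0;
}
```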
Embed into Really Tiny Edge Devices
Neuton's silicon-agnostic models can be deployed on microcontrollers and small computing devices with constraints as tight as the following:
Total on-device memory < 100 kB
Energy: µW scale
Processor < 100 MHz
8-, 16-, and 32-bit architectures
Create Tiny Models without Compression
Neuton models maintain all of their original characteristics, without any reduction of accuracy. Neuton does not reduce the model size after its creation.
Neuton does not use quantization, pruning, clustering, or distillation.
Neuton Pricing and Options
Zero Gravity
Self-service free unlimited plan.
Enterprise Plan
Get assistance and support
Neuton Services are absolutely free of charge. Get up to $500 in credits for infrastructure costs on Google Cloud Platform.
Training
Unlimited number of models
Data Preprocessing
Feature Engineering
Prediction
Unlimited number of predictions via the web interface, REST API or downloaded models
Downloadable C library for Cortex-M0, Cortex-M4, and STMicro ISPU
Explainability
Exploratory Data Analysis
Model Interpreter
Model Quality Diagram
Model Decay Indicator
Infrastructure
Pay only for what you use (see Google Cloud Platform Costs below)
Minimum 100 hours of training included
Build AI-driven IoT devices using the smallest neural networks while taking advantage of professional services from Neuton's team

To launch large-scale IoT projects, Neuton offers a special Enterprise Plan with an individual approach and a full cycle of end-to-end data science services.
News
Discover innovative Neuton solutions at Embedded World 23
02.21.2023

At the show we will run two live demos that illustrate the uniqueness of our approach: Keyword Spotting with a 40 KB total footprint, and Human Activity Recognition with a 3 KB total footprint.

Automated Design of Tiny Machine Learning Models: Part 2
10.26.2022

The long-awaited IEEE newsletter issue is out, featuring the second part of our practical guide: 3 full-cycle use cases illustrating the automated design of tiny machine learning models.

Arm Tech Talk: Solving Real-World Challenges with Arm and Neuton
07.26.2022

Neuton.AI, represented by our embedded engineer Sumit Kumar, will take part in Arm's new AI Tech Talk on September 20th at 8:00 AM PT. You don't want to miss it!

Neuton's Divine Journey
To make the world a better place by augmenting human ingenuity through wider adoption and usage of artificial intelligence, while having a transformative impact on the economy, all industries, their associated scientific breakthroughs and overall quality of life.
Support Library
Compatible with products of major semiconductor vendors
Infrastructure Credits
Use Neuton's free Zero Gravity plan, accompanied by eligible free trial credits to cover infrastructure costs.
Register as a new customer with Google Cloud Platform to be eligible for up to $300 in free trial credits.
Corporate customers are also eligible for an additional $200 in credits on top of the $300 free trial credits. To qualify for the additional $200, customers must register with Neuton as a new customer using a corporate email domain (personal email accounts such as Gmail or Yahoo are not accepted).
Once the available credits have been used up, infrastructure fees will be charged until the subscription is terminated. The subscription may be canceled at any time by canceling the pricing plan.
Google Cloud Platform Costs:
Building models with your data requires IT infrastructure. Neuton runs on Google Cloud Platform (GCP) virtual machines, so every model can be trained in parallel without losing speed: each dataset is trained on a separate virtual machine with sufficient GPU resources. The platform automatically chooses and provisions the right infrastructure for your dataset to ensure fast training.
The costs are calculated as follows:
Storage: $0.04 per GB per month
Training: the cost per model for training one dataset depends on the number of rows and ranges from $0.88 to $4.82 per hour of training
Training costs by dataset size (per hour of training, per model):
0-1,000 rows of data: $0.88
1,001-5,000 rows of data: $1.44
5,001-50,000 rows of data: $2.56
>50,000 rows of data: $4.82
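For example, under these rates, training a model on a 10,000-row dataset for two hours would cost 2 × $2.56 = $5.12, plus $0.04 per GB per month to store the dataset. The actual duration of a training run depends on your data, so this figure is only an illustration.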
For new accounts, GCP provides up to $500 in credits for the first 90 days. $500 is enough for at least 100 hours of training on the platform!
Stay updated, join the community