Neuton Platform Structure
The Neuton Platform enables fast and seamless creation of optimal models, offering a high level of automation in data processing and neural network pipeline generation. In addition, the platform significantly optimizes cloud resource usage.

Below is an overview of the core components that Neuton leverages in its workflow to deliver exceptional results.
Data Validation

Prior to sending data to Model Training, Neuton automatically determines the following:

  • Data format
  • Type of data variables
  • Data qualification for training (based on empty values, duplicates, etc.)
  • Task type being solved
  • Optimization metrics suitable for each problem type

The system immediately notifies the user of any data errors in a comprehensible format.
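The checks above can be sketched in a few lines. This is an illustrative example only, not Neuton's actual validator; the report format and thresholds are assumptions:

```python
# Minimal sketch of pre-training data validation: missing values,
# duplicates, and task-type inference from the target column.
from collections import Counter

def validate(rows, target):
    """Return a report of issues found in a list-of-dicts dataset."""
    report = {"errors": [], "task_type": None}
    if not rows:
        report["errors"].append("empty dataset")
        return report
    # Count empty values per column
    for col in rows[0]:
        missing = sum(1 for r in rows if r[col] in (None, ""))
        if missing:
            report["errors"].append(f"{col}: {missing} empty value(s)")
    # Count duplicate rows
    counts = Counter(tuple(sorted(r.items())) for r in rows)
    dupes = sum(c - 1 for c in counts.values() if c > 1)
    if dupes:
        report["errors"].append(f"{dupes} duplicate row(s)")
    # Infer the task type from the target column
    values = [r[target] for r in rows if r[target] not in (None, "")]
    numeric = all(isinstance(v, (int, float)) and not isinstance(v, bool)
                  for v in values)
    if numeric and len(set(values)) > 10:
        report["task_type"] = "regression"
    elif len(set(values)) == 2:
        report["task_type"] = "binary classification"
    else:
        report["task_type"] = "multiclass classification"
    return report

rows = [{"x": 1, "y": "a"}, {"x": 1, "y": "a"}, {"x": None, "y": "b"}]
print(validate(rows, target="y"))
```

On this toy input the report flags the empty value in `x`, the duplicated first row, and infers a binary classification task from the two distinct target values.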

Data Preprocessing

After successful Data Validation, data analysis and transformation start automatically and include the following actions:

  • Imputation of missing values leveraging tree-based ML models
  • Data cleaning
  • Advanced conversion of categorical variables
  • Outliers identification and processing
  • Automatic data recognition and processing
  • Evaluation of binomial and multinomial feature correlations with the target variable
  • Setup of feature interactions based on linear and tree-based models

Learn more about Neuton Differentiators
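The first bullet, imputing missing values with tree-based models, can be illustrated as follows. This is a simplified sketch using scikit-learn; Neuton's internal pipeline is not public:

```python
# Sketch of tree-based imputation: train a decision tree on the
# complete rows, then predict the missing entry from the other column.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.array([[1.0, 2.0],
              [2.0, 4.1],
              [3.0, 5.9],
              [4.0, np.nan]])  # last row has a missing value in column 1

known = ~np.isnan(X[:, 1])
model = DecisionTreeRegressor(random_state=0)
model.fit(X[known, :1], X[known, 1])          # predict col 1 from col 0
X[~known, 1] = model.predict(X[~known, :1])   # fill the gap
print(X[3, 1])
```

The same idea extends to categorical columns with a tree-based classifier in place of the regressor.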

Feature Engineering

Model quality can be increased by applying the automatic generation of new variables. Feature Engineering is available in Advanced Mode and performs the following actions:

  • Analysis of binomial and multinomial feature correlations with the target variable
  • Creation of feature interactions based on linear models
  • Creation of feature interactions based on tree-based models
  • Detection, examination, and removal of mutually correlated features
  • Feature importance ranking
  • TF-IDF token generation for detected text fields

Learn more about Neuton Differentiators

Neuton Neural Network Creation

Neuton’s neural network creation is the core component of the training phase. The construction of a neural network starts automatically once all data preparation steps are complete.

The Neuton neural network is a self-growing neural network resulting in extremely compact models with the following features:

  • No modifications to stochastic gradient descent; instead, a new efficient global optimization algorithm develops the optimal network structure during training
  • Automatic identification of the global minimum of the error function
  • Solving the problem of vanishing gradients
  • Creation of optimal and compact models that work extremely fast due to having a smaller number of neurons and coefficients

Learn more about Neuton Framework
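Neuton's growth algorithm is proprietary, but the general idea of a self-growing network, adding capacity only while a validation error keeps improving, can be sketched conceptually with an off-the-shelf MLP. Everything below is a simplified illustration, not Neuton's method:

```python
# Conceptual sketch of network growth: enlarge the hidden layer one
# neuron at a time and stop when validation error no longer improves.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = X[:, 0] * X[:, 1]                      # simple nonlinear target
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

best_err, best_size = float("inf"), 0
for size in range(1, 9):                   # grow the hidden layer
    net = MLPRegressor(hidden_layer_sizes=(size,), max_iter=2000,
                       random_state=0).fit(X_tr, y_tr)
    err = float(np.mean((net.predict(X_va) - y_va) ** 2))
    if err < best_err - 1e-4:
        best_err, best_size = err, size
    else:
        break                              # stop growing when no gain
print(best_size, round(best_err, 4))
```

Stopping growth as soon as extra neurons stop paying off is what keeps the resulting model small and fast, the property the bullets above emphasize.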

Web Interface

Prediction results on new data can be viewed or downloaded through a user-friendly web interface in just a few clicks, without any coding.

API Request

Neuton’s REST API functionality can be used as a tool to augment your device or service with AI capabilities. The platform provides examples of its implementation in several different programming languages.
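A typical call to such a REST API is sketched below. The endpoint path, token header, and payload shape are placeholders, not Neuton's documented API; consult the code examples the platform generates for your model:

```python
# Hypothetical prediction request (URL and payload are placeholders).
import json
import urllib.request

def build_prediction_request(base_url, token, rows):
    """Prepare (but do not send) a JSON prediction request."""
    body = json.dumps({"rows": rows}).encode()
    return urllib.request.Request(
        url=f"{base_url}/predict",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )

req = build_prediction_request("https://example.invalid/api", "TOKEN",
                               [{"feature_1": 0.5, "feature_2": 1.2}])
print(req.get_method(), req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) would then return the model's predictions as JSON.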

Download Model

The downloadable option simplifies the deployment of Neuton models, even to edge devices or microcontrollers. The downloadable solution can be used without any ties back to the Neuton platform and requires no Internet connection or special licensing. The Neuton Model Viewer allows you to view the parameters of the neural network generated by Neuton.

Model Quality Evaluation Tools

Model quality evaluation is an essential condition for building efficient machine learning solutions.

  • An extensive list of supported metrics enables users to assess the model from varying perspectives. The list includes 6 metrics for regression, 9 metrics for binomial classification and 9 metrics for multinomial classification.

Learn more about Metrics
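For orientation, a few of the metrics mentioned above can be reproduced with scikit-learn (used here purely for illustration; the platform computes its metrics automatically):

```python
# Two regression metrics and two binary classification metrics.
from sklearn.metrics import (mean_absolute_error, r2_score,
                             accuracy_score, f1_score)

# Regression example
y_true_r, y_pred_r = [3.0, 5.0, 2.5], [2.5, 5.0, 3.0]
print(mean_absolute_error(y_true_r, y_pred_r))   # mean absolute error
print(r2_score(y_true_r, y_pred_r))              # coefficient of determination

# Binary classification example
y_true_c, y_pred_c = [0, 1, 1, 0], [0, 1, 0, 0]
print(accuracy_score(y_true_c, y_pred_c))        # share of correct labels
print(f1_score(y_true_c, y_pred_c))              # harmonic mean of P and R
```

Looking at several metrics at once guards against a model that scores well on one measure while failing on another, which is exactly what the multi-metric view is for.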

  • The Model Quality Diagram and Model Quality Index simplify the process of evaluating models, giving users an overview of a model based on various metrics simultaneously in a single graph.

Learn more about Neuton Explainability Office

Prediction Quality

When building machine learning solutions, it is essential to evaluate the quality of prediction results (row-level explainability):

  • The Confidence Interval, used for regression problems, shows the range within which, and with what probability, the predicted value can vary.
  • Model-to-data Relevance Indicator estimates the statistical differences between the data uploaded for predictions and the data used to train a model. Significant data divergence may indicate metric decay (model prediction quality degradation).

Historical Model-to-data Relevance is an excellent indicator that a model may need to be retrained. This feature is also available for downloadable models, which allows managing the model lifecycle even outside the platform.
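The intuition behind a model-to-data relevance check can be illustrated with a two-sample Kolmogorov-Smirnov statistic comparing training data against incoming prediction data. The actual Neuton indicator is not public; this is only one simple way to measure the kind of divergence described above:

```python
# Compare training vs. incoming data with a KS statistic: the maximum
# distance between the two empirical CDFs. Large values signal drift.
import numpy as np

def ks_statistic(a, b):
    """Max distance between the empirical CDFs of samples a and b."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 1000)
same = rng.normal(0, 1, 1000)       # drawn from the same distribution
shifted = rng.normal(2, 1, 1000)    # drifted data
print(ks_statistic(train, same), ks_statistic(train, shifted))
```

The drifted sample yields a statistic far above the same-distribution sample, which is the signal that prediction quality may be degrading and retraining is worth considering.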

Exploratory Data Analysis

Exploratory Data Analysis (EDA) is a tool that automates graphical data analysis and highlights the most important statistics in the context of a single variable, your overall data, its interconnections, and its relation to the target variable in a training dataset. Given the potentially wide feature space, up to 20 of the most important features with the highest statistical significance are selected for EDA based on machine learning modeling.

Learn more about Neuton Explainability Office
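One step of such an analysis, ranking features by their relation to the target and keeping only the strongest, can be sketched with plain correlation. Neuton selects features via machine learning modeling; correlation is used here only as a simplified stand-in:

```python
# Rank features by absolute correlation with the target and keep the
# top ones (a simplified stand-in for ML-based feature selection).
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 4))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.1, size=n)  # only cols 0, 2 matter

corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
top = sorted(range(X.shape[1]), key=lambda j: corr[j], reverse=True)[:2]
print(sorted(top))
```

The two informative columns come out on top, and the EDA charts would then focus on exactly those features.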

Feature Importance Matrix

Once the model is trained, the platform displays a chart with the 10 most important features that affect the model prediction power. Additionally, a user can select any other feature to check its relative importance. The Feature Importance Matrix has 2 modes, one displaying only the original features and the other displaying the features after feature engineering. When performing classification tasks, you can also see the feature importance for every class.

Learn more about Neuton Explainability Office
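The kind of ranking behind such a chart can be illustrated with a tree ensemble's per-feature importances. This uses scikit-learn as a stand-in; the platform computes and displays its own ranking automatically:

```python
# Feature importance ranking from a random forest: informative
# features should rank above pure-noise features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 1] + 0.5 * X[:, 3] > 0).astype(int)   # only features 1 and 3 matter

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
print(ranking[:2])
```

The strongest feature lands first in the ranking, mirroring how the chart surfaces the variables that most affect prediction power.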

Model Interpreter

Along with evaluating model quality, it is equally important to correctly interpret the prediction results.

  • The Model Interpreter is a tool that provides the visual demonstration of the logic, direction, and effects of changes in individual variables of the model. It also shows the importance of these variables in relation to the target variable.

Learn more about Model Interpreter
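The core idea, showing the direction and effect of one variable on the prediction, can be sketched as a one-dimensional partial-dependence sweep: vary one input over a grid while holding the others at their means. This is a generic illustration, not the Model Interpreter's actual visualization:

```python
# Sweep one feature over a grid, holding the others at their means,
# to see the direction and magnitude of its effect on predictions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)
model = LinearRegression().fit(X, y)

def effect(model, X, feature, grid):
    """Average prediction as one feature sweeps over `grid`."""
    base = X.mean(axis=0)
    out = []
    for v in grid:
        row = base.copy()
        row[feature] = v
        out.append(float(model.predict(row.reshape(1, -1))[0]))
    return out

curve = effect(model, X, feature=0, grid=[-1.0, 0.0, 1.0])
print([round(c, 2) for c in curve])
```

The rising curve recovers the positive effect of the first variable, exactly the kind of logic-and-direction view the Model Interpreter provides graphically.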

Google Cloud
Automated Storage Provisioning

The platform automatically executes provisioning and de-provisioning of storage for each respective model to ensure maximum data security.

Automated Virtual Machine Provisioning

During both the training and prediction phases, the platform automatically executes provisioning and de-provisioning of virtual machines with the most suitable configurations, based on dataset parameters.

During the prediction phase, virtual machine usage/time is controlled by the user through a user-friendly interface.
