Neuton key value proposition
1. Self Signup
Easy and intuitive registration and subscription process
No need to schedule an appointment with a sales representative to get access to the service
2. Single Web Interface
Single web interface for both training and prediction
Full model creation process is performed in a single window, from data upload to predictions. A wizard walks you through the steps of creating a model: Upload → Train → Predict
Web interface for batch predictions (upload/download CSV files)
No need to juggle multiple portals or menus for full control, including working with VMs
3. Simple and straightforward interface
Easy model creation in “3 clicks”
Intuitive and user-friendly workflow with a minimum of technical terms
No coding required for infrastructure provisioning and setup
Single dashboard displays all AI models, their statuses, target metrics and VM statuses
Training can be stopped and resumed in one click from the same point, thus saving time and infrastructure costs
Clear web prediction interface; no need to transform a dataset into a specific JSON format (data is uploaded as a raw CSV file)
Prediction results can be viewed within the web interface or downloaded
4. Fully automated process
Automated infrastructure provisioning and deprovisioning includes:
Creation of buckets for datasets
Enablement of Virtual Machines for training
VM deprovisioning when an experiment is complete
Provisioning of a dedicated VM for every experiment run, avoiding resource consumption conflicts between experiments
Automated selection of settings (task type, target metric, preprocessing and time limitations)
Advanced preprocessing and feature engineering. Neuton even fills in missing values using intelligent algorithms, can combine up to 4 original features during feature engineering, and uses supporting models to process raw datasets.
Enabling predictions (model deployment in the cloud) with a single mouse click
The Neuton REST API can be used to augment your device or service with AI capabilities. We offer examples of how to use it in several programming languages.
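For illustration, here is a minimal Python sketch of calling a deployed model over REST. The endpoint URL, token header, and payload layout are hypothetical placeholders, not Neuton's documented request format; see the platform documentation for the actual contract.

```python
# Hypothetical REST prediction call; the URL, auth header, and payload
# shape are placeholders, not Neuton's documented request format.
import requests

API_URL = "https://example.com/v1/models/<model-id>/predict"  # hypothetical
API_TOKEN = "YOUR_API_TOKEN"                                  # hypothetical

rows = [{"age": 42, "income": 58000.0}]  # one record per prediction

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"rows": rows},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```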
Model insight and evaluation tools
1. Model Interpreter
The Model Interpreter is a tool that lets you visualize the logic, the direction, and the effect of changes in individual variables in the model. It also shows the importance of these variables relative to the target variable.
2. Feature Importance Matrix (FIM)
After the model has been trained, the platform displays a chart with the 10 features that had the most significant impact on the model's predictive power. You can also select any other feature to see its importance. The FIM has two modes, displaying either only the original features or the features after feature engineering. For classification tasks you can see the feature importance for every class.
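To make the concept concrete, here is a generic feature-importance computation using scikit-learn's permutation importance on a toy dataset; this illustrates the idea only and is not Neuton's internal method.

```python
# Generic feature-importance illustration (not Neuton's internal method):
# permutation importance with scikit-learn on a built-in toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Top 10 features by mean importance, mirroring the FIM chart.
top10 = result.importances_mean.argsort()[::-1][:10]
for i in top10:
    print(f"{X.columns[i]:30s} {result.importances_mean[i]:.4f}")
```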
3. Feature Influence Indicator
Feature Influence Indicator shows feature influence on each prediction.
4. Exploratory Data Analysis (EDA)
Exploratory Data Analysis (EDA) is a tool that automates graphical data analysis and highlights the most important statistics for individual variables, the overall data, feature interconnections, and relationships with the target variable in a training dataset. Given the potentially wide feature space, up to 20 features with the highest statistical significance are selected for EDA based on machine learning modeling.
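As a rough illustration of the kind of statistics an automated EDA step surfaces, here is a small pandas sketch; the toy DataFrame and column names are invented for the example.

```python
# Sketch of automated per-variable statistics; the toy DataFrame and
# column names below are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(40, 10, 500),
    "income": rng.normal(60_000, 15_000, 500),
})
df["target"] = 0.5 * df["age"] + rng.normal(0, 5, 500)  # numeric target

print(df.describe())     # per-variable summary statistics
print(df.isna().mean())  # missing-value ratios

# Correlation of each feature with the target, strongest first.
corr = df.corr(numeric_only=True)["target"].drop("target")
print(corr.reindex(corr.abs().sort_values(ascending=False).index))
```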
5. Confidence Interval
For regression problems, the Confidence Interval shows the range within which the predicted value can vary, and with what probability.
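One generic way to construct such an interval (not Neuton's internal algorithm) is to take empirical quantiles of validation residuals, as in this sketch.

```python
# Generic interval around regression predictions (not Neuton's internal
# algorithm): empirical quantiles of validation residuals as a ~95% band.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=8, noise=10.0,
                       random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
residuals = y_val - model.predict(X_val)
lo, hi = np.quantile(residuals, [0.025, 0.975])  # 95% residual band

for p in model.predict(X_val[:3]):
    print(f"prediction {p:8.2f}  interval [{p + lo:8.2f}, {p + hi:8.2f}]")
```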
6. Extensive list of supported Metrics
Extensive list of supported Metrics allows users to evaluate the model from different angles:
6 metrics for regression: Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Root Mean Squared Logarithmic Error (RMSLE), Root Mean Squared Percentage Error (RMSPE), and Coefficient of Determination (R2)
8 metrics for binomial classification: Accuracy, Balanced Accuracy, Precision, Recall, F1 Score, Gini, ROC AUC, and Lift
8 metrics for multinomial classification: Accuracy, Balanced Accuracy, Macro Average Precision, Weighted Average Precision, Macro Average Recall, Weighted Average Recall, Macro Average F1 Score, and Weighted Average F1 Score
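These metrics are reported directly in the Neuton UI; for reference, here is how a few of them are computed with scikit-learn.

```python
# Computing a few of the listed metrics with scikit-learn, for reference.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, mean_absolute_error,
                             mean_squared_error, r2_score, roc_auc_score)

# Regression metrics on a toy ground-truth / prediction pair.
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.4, 2.9, 6.6])
print("MAE :", mean_absolute_error(y_true, y_pred))
print("MSE :", mean_squared_error(y_true, y_pred))
print("RMSE:", mean_squared_error(y_true, y_pred) ** 0.5)
print("R2  :", r2_score(y_true, y_pred))

# Classification metrics from labels and predicted scores.
labels = np.array([0, 1, 1, 0, 1])
scores = np.array([0.2, 0.9, 0.6, 0.4, 0.7])
print("Accuracy:", accuracy_score(labels, scores > 0.5))
print("F1      :", f1_score(labels, scores > 0.5))
print("ROC AUC :", roc_auc_score(labels, scores))
```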
7. Model-to-Data Relevance Indicator
Model-to-data Relevance Indicator calculates the statistical differences between the data uploaded for predictions and the data used for model training. Significant differences in the data may indicate metric decay (model prediction quality degradation).
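A common way to quantify such differences per feature (a generic sketch, not Neuton's exact statistic) is a two-sample Kolmogorov-Smirnov test.

```python
# Generic train-vs-serving drift check per feature (not Neuton's exact
# statistic): a two-sample Kolmogorov-Smirnov test on each column.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_age = rng.normal(40, 10, 1000)    # feature as seen in training
serving_age = rng.normal(47, 10, 1000)  # same feature at prediction time

stat, p = ks_2samp(train_age, serving_age)
print(f"age: KS={stat:.3f}, p={p:.2e} ->",
      "possible drift" if p < 0.01 else "ok")
```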
8. Historical Model-to-Data Relevance Indicator
Historical Model-to-Data Relevance is an excellent signal that a model needs retraining. This indicator also works for downloadable models, which allows you to manage the model lifecycle even outside the platform.
9. Model Quality Diagram
The Model Quality Diagram simplifies evaluation of model quality and lets users look at the model from the perspective of multiple metrics simultaneously, in a single graphical view. We offer an extensive list of quality metrics popular in the data science community.
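As an illustration of the idea, a multi-metric view can be drawn as a radar chart; the metric names and values below are made up for the example.

```python
# Sketch of a multi-metric "quality diagram" as a radar chart in
# matplotlib; metric names and values are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

metrics = ["Accuracy", "Precision", "Recall", "F1", "ROC AUC"]
values = [0.91, 0.88, 0.84, 0.86, 0.93]

angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
values_closed = values + values[:1]   # close the polygon
angles_closed = angles + angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles_closed, values_closed)
ax.fill(angles_closed, values_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(metrics)
ax.set_ylim(0, 1)
plt.show()
```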
10. Coming Soon: Validate Model on New Data
Validate Model on New Data shows model metrics on new data to help determine whether the model should be retrained to reflect the statistical changes and dependencies in the new data. It also shows these metrics in multidimensional space (Model Quality Diagram).
Self-growing network structure, trained with 5-fold cross-validation to minimize overfitting (a generic cross-validation sketch appears after this list)
Models up to 1000 times smaller (neurons, coefficients, size in KB)
Up to 1000 times faster prediction
Can be built into microcontrollers and other small compute devices
In most cases, model accuracy is higher than that of AutoML Giants and most Venture Backed AutoML Companies
Neuton works perfectly with datasets of any size; unlike Google or Amazon, you can train a model on data with fewer than 900 rows just as effectively as on datasets that are many gigabytes in size.
Most of Neuton’s competitors build multiple models in an effort to determine the best option. Neuton effectively and efficiently solves most problems simply by utilizing our single neural framework. The result is an overall reduction in infrastructure costs, a savings passed along to the user.
Unique Neural Network training algorithm guaranteeing the best model predictive power
You can download a Neuton model and use it online or offline without a subscription. The archive contains the model in binary format and other supporting scripts and objects that, along with a simple Python script, pull it all together when predicting on new data. The downloaded model is absolutely free to use and requires no further access to the Neuton Platform and no license key.
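Usage might look like the following sketch; the module and function names are placeholders for whatever the bundled scripts actually expose (see the README inside the archive).

```python
# Hypothetical usage of a downloaded model archive; "neuton_model",
# "load_model", and the file names are placeholders, not the real API.
import pandas as pd
from neuton_model import load_model  # hypothetical entry point

model = load_model("model.bin")          # binary model from the archive
new_data = pd.read_csv("new_data.csv")   # raw CSV with the training columns
predictions = model.predict(new_data)
pd.DataFrame({"prediction": predictions}).to_csv("predictions.csv",
                                                 index=False)
```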
Faster Training (in comparison to those of AI/AutoML Giants and most Venture Backed AI/AutoML Companies)
Progress bar makes it clear when the training process is about to complete
You can both stop and resume training in one easy click
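For reference, this is what generic 5-fold cross-validation looks like with scikit-learn; it illustrates the evaluation scheme mentioned above, not Neuton's training internals.

```python
# Generic 5-fold cross-validation with scikit-learn (an illustration of
# the evaluation scheme, not Neuton's training internals).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("fold accuracies:", scores.round(3), "mean:", scores.mean().round(3))
```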
Transparent pricing and billing
Free plan includes free subscription to Neuton and up to $500 of credits to cover infrastructure costs
Infrastructure costs are much lower than those of competitors (especially most of AI/AutoML Giants)
Users pay for infrastructure only when training a model or making web predictions. Training can be stopped and resumed from the same point thus saving time and infrastructure costs. Training infrastructure also automatically deprovisions when training stops to avoid unnecessary infrastructure costs.
Users are notified immediately about any additional costs (for example, while provisioning a VM for prediction)
Users can download a model and use it without Neuton, absolutely free and without any additional infrastructure charges.
Unlike some competitors, Neuton works with both time series and text data. While there are additional settings for Time Series analysis, our NLP module is fully automated: if a text column is present in your data, it will be processed according to current NLP best practices.
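A generic illustration of automated text-column handling (not Neuton's actual NLP pipeline) is to detect text columns and vectorize them, for example with TF-IDF.

```python
# Generic automated text-column handling (not Neuton's actual pipeline):
# detect object-typed columns and vectorize them with TF-IDF.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

df = pd.DataFrame({
    "review": ["great product", "poor quality", "great value"],
    "price": [10.0, 5.0, 8.0],
})

text_cols = [c for c in df.columns if df[c].dtype == object]
for col in text_cols:
    tfidf = TfidfVectorizer(max_features=100)
    matrix = tfidf.fit_transform(df[col])
    print(col, "->", matrix.shape[1], "TF-IDF features")
```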
Missing values imputation using tree-based machine learning models (see the sketch after this list)
Data cleaning
Advanced categorical variables transformation
Outliers treatment
Automatic date recognition and processing
Analysis of binomial and multinomial feature correlations with the target variable
Creation of feature interactions based on linear and tree-based models
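Here is a sketch of two of the steps above using scikit-learn stand-ins, since Neuton's own algorithms are not public: tree-based imputation of missing values, followed by generation of multiplicative feature interactions.

```python
# Scikit-learn stand-ins for two preprocessing steps (Neuton's own
# algorithms are not public): tree-based imputation of missing values,
# then multiplicative interaction terms between the original features.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.preprocessing import PolynomialFeatures

df = pd.DataFrame({
    "a": [1.0, 2.0, np.nan, 4.0, 5.0],
    "b": [2.0, np.nan, 6.0, 8.0, 10.0],
    "c": [5.0, 4.0, 3.0, np.nan, 1.0],
})

# Impute each missing value with a tree-based model fit on the other columns.
imputer = IterativeImputer(
    estimator=ExtraTreesRegressor(n_estimators=50, random_state=0),
    random_state=0,
)
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

# Pairwise multiplicative interactions between the original features.
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
interactions = poly.fit_transform(filled)
print(poly.get_feature_names_out(filled.columns))
```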
Stay updated, join the community
Slack