1. Model Interpreter
The Model Interpreter is a tool that lets you visualize the logic, direction, and effect of
changes in individual variables in the model. It also shows the importance of these variables
in relation to the target variable.
2. Feature Importance Matrix (FIM)
After the model has been trained, the platform displays a chart with the 10 features that had
the most significant impact on the model's predictive power. You can also select any other
feature to see its importance. The FIM also has two modes, displaying either only the original
features or the features produced by feature engineering. For classification tasks, you can see
the feature importance for every class.
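The source does not say how the platform scores importance; one common, model-agnostic approach it could resemble is permutation importance, where a feature's score is the loss increase caused by shuffling its column. A minimal numpy sketch (the model and data are hypothetical):

```python
import numpy as np

def permutation_importance(model_predict, X, y, n_repeats=10, seed=0):
    """Importance = average increase in MSE when a feature's values are shuffled."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((model_predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for col in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, col])         # break the feature-target link for this column
            drops.append(np.mean((model_predict(Xp) - y) ** 2) - base_mse)
        importances[col] = np.mean(drops)
    return importances

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1]           # feature 2 plays no role
predict = lambda A: 3.0 * A[:, 0] + 0.1 * A[:, 1]
imp = permutation_importance(predict, X, y)
```

Here `imp[0]` dominates, `imp[1]` is small, and `imp[2]` is zero, matching the features' true roles; sorting `imp` and keeping the top entries mirrors a top-10 chart.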
3. Feature Influence Indicator
The Feature Influence Indicator shows how each feature influences an individual prediction.
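The platform does not specify how per-prediction influence is computed; for a linear model, though, the decomposition into signed per-feature contributions is exact and illustrates the idea. A minimal sketch with hypothetical weights:

```python
import numpy as np

def linear_contributions(weights, x):
    """Signed per-feature contribution to one linear-model prediction.

    intercept + contributions.sum() reconstructs the prediction exactly.
    """
    return weights * x

w = np.array([0.5, -2.0, 0.0])              # hypothetical fitted weights
x = np.array([4.0, 1.0, 7.0])               # one incoming row
c = linear_contributions(w, x)              # per-feature influence on this prediction
prediction = 1.0 + c.sum()                  # intercept 1.0 plus contributions
```

For non-linear models, platforms typically use attribution methods such as SHAP to obtain an analogous additive decomposition.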
4. Exploratory Data Analysis (EDA)
Exploratory Data Analysis (EDA) is a tool that automates graphical data analysis and highlights
the most important statistics in the context of a single variable, the overall data,
interconnections between variables, and the relation to the target variable in a training
dataset. Given the potentially wide feature space, up to 20 features with the highest
statistical significance are selected for EDA based on machine learning modeling.
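How the platform ranks features for this top-20 cut is not specified; a simple stand-in for "statistical significance with respect to the target" is absolute Pearson correlation. A self-contained sketch (data and the `k` cut-off are illustrative):

```python
import numpy as np

def top_k_by_correlation(X, y, k=20):
    """Rank features by absolute Pearson correlation with the target."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
    )
    order = np.argsort(-np.abs(corr))       # strongest relationships first
    return order[:k], corr[order[:k]]

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 6))
y = 2.0 * X[:, 3] + rng.normal(scale=0.1, size=500)   # only feature 3 drives the target
idx, corr = top_k_by_correlation(X, y, k=3)
```

Feature 3 lands at the top of the ranking; in practice a model-based importance (as in the FIM) captures non-linear relationships that raw correlation misses.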
5. Confidence Interval
For regression problems, the Confidence Interval shows the range within which the predicted
value may vary, and with what probability.
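One common way to build such an interval, which the platform may or may not use, is from the empirical quantiles of held-out residuals (the idea behind split conformal prediction). A minimal sketch with synthetic residuals:

```python
import numpy as np

def empirical_interval(residuals, prediction, confidence=0.9):
    """Prediction interval from held-out residual quantiles.

    residuals: validation-set errors (actual - predicted).
    """
    alpha = (1.0 - confidence) / 2.0
    lo = np.quantile(residuals, alpha)
    hi = np.quantile(residuals, 1.0 - alpha)
    return prediction + lo, prediction + hi

rng = np.random.default_rng(3)
residuals = rng.normal(scale=2.0, size=1000)   # stand-in for validation errors
low, high = empirical_interval(residuals, prediction=10.0, confidence=0.9)
```

If the error distribution stays stable, roughly 90% of new target values should fall inside `[low, high]`.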
6. Extensive list of supported Metrics
An extensive list of supported metrics allows users to evaluate the model from different angles.
There are six metrics for regression: Mean Absolute Error (MAE), Mean Squared Error (MSE),
Root Mean Squared Error (RMSE), Root Mean Squared Logarithmic Error (RMSLE), Root Mean Squared
Percentage Error (RMSPE), and Coefficient of Determination (R2); eight metrics for binomial
classification: Accuracy, Balanced Accuracy, Precision, Recall, F1 Score, Gini, ROC AUC, and
Lift; and eight metrics for multinomial classification: Accuracy, Balanced Accuracy, Macro
Average Precision, Weighted Average Precision, Macro Average Recall, Weighted Average Recall,
Macro Average F1 Score, and Weighted Average F1 Score.
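The regression metrics above have standard definitions; a compact numpy sketch of the six, with illustrative values:

```python
import numpy as np

def mae(y, p):   return np.mean(np.abs(y - p))                 # Mean Absolute Error
def mse(y, p):   return np.mean((y - p) ** 2)                  # Mean Squared Error
def rmse(y, p):  return np.sqrt(mse(y, p))                     # Root Mean Squared Error
def rmsle(y, p): return np.sqrt(np.mean((np.log1p(y) - np.log1p(p)) ** 2))
def rmspe(y, p): return np.sqrt(np.mean(((y - p) / y) ** 2))   # percentage error (y != 0)
def r2(y, p):    return 1.0 - np.sum((y - p) ** 2) / np.sum((y - np.mean(y)) ** 2)

y = np.array([3.0, 5.0, 2.5, 7.0])   # actual values
p = np.array([2.5, 5.0, 4.0, 8.0])   # predicted values
```

For example, `mae(y, p)` is 0.75 and `mse(y, p)` is 0.875; RMSLE and RMSPE assume non-negative and non-zero targets respectively, which is why they suit some regression problems and not others.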
7. Model-to-Data Relevance Indicator
Model-to-data Relevance Indicator calculates the statistical differences between the data
uploaded for predictions and the data used for model training. Significant differences in the
data may indicate metric decay (model prediction quality degradation).
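The statistical test the platform uses is not named; a common choice for comparing a feature's training distribution against incoming prediction data is the two-sample Kolmogorov-Smirnov statistic. A self-contained sketch (thresholds and data are illustrative):

```python
import numpy as np

def ks_statistic(train_col, new_col):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    values = np.sort(np.concatenate([train_col, new_col]))
    cdf_train = np.searchsorted(np.sort(train_col), values, side="right") / len(train_col)
    cdf_new = np.searchsorted(np.sort(new_col), values, side="right") / len(new_col)
    return np.max(np.abs(cdf_train - cdf_new))

rng = np.random.default_rng(4)
train = rng.normal(0.0, 1.0, size=2000)      # a feature column at training time
same = rng.normal(0.0, 1.0, size=2000)       # new data, same distribution -> small gap
shifted = rng.normal(1.5, 1.0, size=2000)    # new data, shifted distribution -> large gap
```

A near-zero statistic for `same` and a large one for `shifted` is exactly the kind of signal that flags likely metric decay before labels for the new data are available.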
8. Historical Model-to-Data Relevance Indicator
Historical Model-to-Data Relevance is an excellent signal that a model should be retrained.
This indicator is available even for downloadable models, which allows users to manage a
model's lifecycle even outside the platform.
9. Model Quality Diagram
The Model Quality Diagram simplifies the process of evaluating the quality of the model, and
also lets users look at the model from the perspective of various metrics simultaneously in a
single graphical view. We offer an extensive list of quality metrics popular in the data
science community.
10. Coming Soon: Validate Model on New Data
Validate Model on New Data shows model metrics on new data to help determine whether the model
should be retrained to reflect the statistical changes and dependencies in the new data. It
also shows metrics in multidimensional space (the Model Quality Diagram).
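The retraining decision this feature supports can be sketched as a simple comparison of a metric on fresh labeled data against the training-time baseline; the tolerance and values below are illustrative, not the platform's defaults:

```python
import numpy as np

def needs_retraining(y_new, pred_new, baseline_rmse, tolerance=0.2):
    """Flag retraining when RMSE on fresh labeled data degrades past a tolerance."""
    new_rmse = np.sqrt(np.mean((y_new - pred_new) ** 2))
    return new_rmse > baseline_rmse * (1.0 + tolerance), new_rmse

y_new = np.array([1.0, 2.0, 3.0, 4.0])       # newly observed targets
good_preds = np.array([1.1, 1.9, 3.0, 4.2])  # model still tracks the data
bad_preds = np.array([2.0, 3.5, 1.0, 6.0])   # model has drifted badly

flag_good, _ = needs_retraining(y_new, good_preds, baseline_rmse=0.15)
flag_bad, _ = needs_retraining(y_new, bad_preds, baseline_rmse=0.15)
```

Combined with the drift signal from the Model-to-Data Relevance Indicator, this gives both an unlabeled early warning and a labeled confirmation that retraining is due.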