compute_metrics

improvelib.metrics.compute_metrics(y_true, y_pred, metric_type, y_prob=None)

Compute the specified set of metrics.

If metric_type is 'regression', the metrics returned include: mse, rmse, pcc, scc, and r2.

If metric_type is 'classification', the metrics returned include: acc, recall, precision, f1, kappa, and bacc.

If metric_type is 'classification' and y_prob is specified, the metrics returned also include roc_auc and aupr.

Used internally by compute_performance_scores, and can also be called directly, as in the sketch below.
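For instance, a minimal sketch (the small arrays here are illustrative; any np.ndarray inputs of matching shape work) showing how y_prob changes the returned keys:

import numpy as np
import improvelib.metrics as metr

# Illustrative binary-classification data
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])
y_prob = np.array([0.2, 0.9, 0.4, 0.1, 0.8])  # positive-class scores

# Without y_prob: acc, recall, precision, f1, kappa, bacc
scores = metr.compute_metrics(y_true, y_pred, 'classification')

# With y_prob: the same keys plus roc_auc and aupr
scores = metr.compute_metrics(y_true, y_pred, 'classification', y_prob=y_prob)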

Parameters:

y_true : np.ndarray

Ground truth.

y_pred : np.ndarray

Predictions made by the model.

metric_type : str

Type of metrics to compute ('classification' or 'regression').

y_prob : np.ndarray, optional

Target scores produced by the classification model. Defaults to None.

Returns:

scores : dict

A dictionary of evaluated metrics.
Keys for regression: ["mse", "rmse", "pcc", "scc", "r2"]
Keys for classification without y_prob: ["acc", "recall", "precision", "f1", "kappa", "bacc"]
Keys for classification with y_prob: ["acc", "recall", "precision", "f1", "kappa", "bacc", "roc_auc", "aupr"]

Raises:

ValueError

If an invalid metric_type is provided.
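As a minimal sketch, the error can be caught like any other ValueError (the metric_type string here is deliberately invalid):

import numpy as np
import improvelib.metrics as metr

y_true = np.array([0.1, 0.5])
y_pred = np.array([0.2, 0.4])

try:
    scores = metr.compute_metrics(y_true, y_pred, 'ranking')  # invalid metric_type
except ValueError as err:
    print(err)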

Example

compute_metrics is called by compute_performance_scores, which also saves the scores.

You can call compute_metrics directly to access scores, for example while monitoring model training:

import improvelib.metrics as metr

# y_true, y_pred, and metric_type are assumed to be defined upstream
scores = metr.compute_metrics(y_true, y_pred, metric_type)
mse_to_monitor = scores['mse']  # present when metric_type is 'regression'
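A hedged sketch of per-epoch monitoring follows; the model object, its fit/predict API, the data arrays, and n_epochs are hypothetical stand-ins, not part of improvelib:

import improvelib.metrics as metr

best_mse = float('inf')
for epoch in range(n_epochs):       # n_epochs: hypothetical
    model.fit(x_train, y_train)     # hypothetical model API
    y_pred = model.predict(x_val)   # hypothetical model API
    scores = metr.compute_metrics(y_val, y_pred, 'regression')
    if scores['mse'] < best_mse:
        best_mse = scores['mse']    # track the best epoch so far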