Metrics#
aitlas.metrics.classification module#
Metrics for classification tasks.
- class AccuracyScore(**kwargs)[source]#
Bases:
BaseMetric
Accuracy score class, inherits from BaseMetric.
- name = 'accuracy'#
- key = 'accuracy'#
- calculate(y_true, y_pred)[source]#
Computes the Accuracy score.
Given model predictions for a target variable, it calculates the accuracy score as the number of correct predictions divided by the total number of predictions.
- Parameters:
y_true (array-like of arbitrary size) – The ground truth values for the target variable.
y_pred (array-like of identical size as y_true) – The prediction values for the target variable.
- Returns:
A number in [0, 1], where 1 is a perfect classification.
- Return type:
float
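As an illustration, the computation performed by calculate can be sketched in plain Python (the helper below is hypothetical, not part of the aitlas API):

```python
# A plain-Python sketch of what AccuracyScore.calculate computes:
# correct predictions divided by total predictions.

def accuracy(y_true, y_pred):
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same size")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

print(accuracy([0, 1, 2, 2, 1], [0, 1, 1, 2, 1]))  # 4 of 5 correct -> 0.8
```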
- class AveragedScore(**kwargs)[source]#
Bases:
BaseMetric
Average score class. Inherits from BaseMetric.
- calculate(y_true, y_pred)[source]#
It calculates the score for each class and then averages the results. The type of average is one of {‘micro’, ‘macro’, ‘weighted’}:
- ‘micro’: Calculate metrics globally by counting the total true positives, false negatives and false positives.
- ‘macro’: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- ‘weighted’: Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance.
- Parameters:
y_true (array-like) – The ground truth labels
y_pred (array-like) – The predicted labels
- Returns:
A dictionary with the micro, macro and weighted average scores
- Return type:
dict
- Raises:
ValueError – If the shapes of y_pred and y_true do not match.
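The three averaging modes can be illustrated with a hand-rolled precision computation (a sketch, not the aitlas implementation; the helper name is hypothetical):

```python
from collections import Counter

# Per-class precision: TP / (TP + FP) for each label.
def per_class_precision(y_true, y_pred, labels):
    prec = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
        prec[c] = tp / (tp + fp) if tp + fp else 0.0
    return prec

y_true = [0, 0, 0, 0, 1, 1]   # class 0 has twice the support of class 1
y_pred = [0, 0, 0, 1, 1, 1]
labels = [0, 1]
prec = per_class_precision(y_true, y_pred, labels)

# 'macro': unweighted mean over classes
macro = sum(prec.values()) / len(labels)

# 'weighted': mean weighted by support (number of true instances per class)
support = Counter(y_true)
weighted = sum(prec[c] * support[c] for c in labels) / len(y_true)

# 'micro': global TP / (TP + FP); for single-label classification this
# coincides with accuracy
micro = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_pred)

print({"micro": micro, "macro": macro, "weighted": weighted})
```

On this imbalanced example the weighted average (8/9) exceeds the macro average (5/6) because it favours the majority class, which is predicted more precisely.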
- class PrecisionScore(**kwargs)[source]#
Bases:
AveragedScore
Precision score class, inherits from AveragedScore.
- name = 'precision'#
- key = 'precision'#
- class RecallScore(**kwargs)[source]#
Bases:
AveragedScore
Recall score class, inherits from AveragedScore.
- name = 'recall'#
- key = 'recall'#
- class F1Score(**kwargs)[source]#
Bases:
AveragedScore
F1 score class, inherits from AveragedScore.
- name = 'f1 score'#
- key = 'f1_score'#
aitlas.metrics.segmentation module#
Metrics for segmentation tasks.
- class F1ScoreSample(**kwargs)[source]#
Bases:
BaseMetric
Calculates the F1 score metric for binary segmentation tasks.
- name = 'F1 Score'#
- key = 'f1_score'#
- calculate(y_true, y_pred, beta=1, eps=1e-07)[source]#
Calculate the F1 Score.
- Parameters:
y_true (array-like) – The ground truth values for the target variable.
y_pred (array-like of identical size as y_true) – The prediction values for the target variable.
beta (float) – Weight of precision in the combined score. Default is 1.
eps (float) – Small constant added to avoid division by zero. Default is 1e-07.
- Returns:
F1 score
- Return type:
float
- Raises:
ValueError – If the shapes of y_pred and y_true do not match.
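The beta and eps parameters suggest an F-beta formulation with an epsilon guard; a plain-Python sketch over flat binary masks (hypothetical helper, and the exact placement of eps is an assumption, not taken from the aitlas source):

```python
# Sketch of an F-beta score over flat 0/1 masks, with eps guarding
# division by zero. Defaults mirror the signature above.
def fbeta_binary(y_true, y_pred, beta=1, eps=1e-07):
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same shape")
    tp = sum(t * p for t, p in zip(y_true, y_pred))
    precision = tp / (sum(y_pred) + eps)
    recall = tp / (sum(y_true) + eps)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall + eps)

score = fbeta_binary([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```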
- class IoU(**kwargs)[source]#
Bases:
BaseMetric
Calculates the Intersection over Union (IoU) metric for binary segmentation tasks.
- name = 'IoU'#
- key = 'iou'#
- calculate(y_true, y_pred, eps=1e-07)[source]#
Calculate the IoU score.
- Parameters:
y_true (array-like) – The ground truth values for the target variable.
y_pred (array-like of identical size as y_true) – The prediction values for the target variable.
eps (float) – Small constant added to avoid division by zero. Default is 1e-07.
- Returns:
IoU score
- Return type:
float
- Raises:
ValueError – If the shapes of y_pred and y_true do not match.
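A plain-Python sketch of the IoU computation over flat binary masks (hypothetical helper; where exactly eps is applied is an assumption, not taken from the aitlas source):

```python
# IoU = |intersection| / |union| over 0/1 masks, with eps preventing
# division by zero when both masks are empty.
def iou_binary(y_true, y_pred, eps=1e-07):
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same shape")
    intersection = sum(t * p for t, p in zip(y_true, y_pred))
    union = sum(max(t, p) for t, p in zip(y_true, y_pred))
    return intersection / (union + eps)

score = iou_binary([1, 1, 0, 0], [1, 0, 1, 0])  # intersection 1, union 3
```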
- class Accuracy(**kwargs)[source]#
Bases:
BaseMetric
Calculates the accuracy metric.
- name = 'Accuracy'#
- key = 'accuracy'#
- class DiceCoefficient(**kwargs)[source]#
Bases:
BaseMetric
A Dice Coefficient metric, used to evaluate the similarity of two sets.
Note
More information on its Wikipedia page: https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient
- name = 'DiceCoefficient'#
- key = 'dice_coefficient'#
- calculate(y_true, y_pred)[source]#
Method to compute the Dice coefficient.
Given two sets X and Y, the coefficient is calculated as:
\[DSC = \frac{2\,|X \cap Y|}{|X| + |Y|}\]
where \(|X|\) and \(|Y|\) are the cardinalities of the two sets.
Note
Based on the implementation at: CosmiQ/cresi
- Parameters:
y_true (array-like) – The ground truth values for the target variable.
y_pred (array-like of identical size as y_true) – The prediction values for the target variable.
- Returns:
A number in [0, 1], where 0 equals no similarity and 1 is maximum similarity.
- Return type:
float
- Raises:
ValueError – If the shapes of y_pred and y_true do not match.
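The formula above can be sketched directly on flat binary masks (hypothetical helper; the empty-mask convention is an assumption, not taken from the aitlas source):

```python
# Dice coefficient on 0/1 masks: 2*|X ∩ Y| / (|X| + |Y|).
def dice(y_true, y_pred):
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same shape")
    intersection = sum(t * p for t, p in zip(y_true, y_pred))
    total = sum(y_true) + sum(y_pred)
    if total == 0:
        return 1.0  # two empty masks are treated as identical
    return 2 * intersection / total

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 2*1 / (2+2) -> 0.5
```

Dice and IoU rank predictions identically, since DSC = 2·IoU / (1 + IoU); on this example IoU is 1/3 and Dice is 0.5.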
- class FocalLoss(alpha=1, gamma=2, logits=True, reduce=True, **kwargs)[source]#
Bases:
BaseMetric
Class for calculating Focal Loss, a loss metric that extends the binary cross entropy loss. Focal loss reduces the relative loss for well-classified examples and puts more focus on hard, misclassified examples. Computed as:
\[\alpha \cdot (1 - \text{bce\_loss})^{\gamma}\]
Note
For more information, refer to the papers: https://paperswithcode.com/method/focal-loss, and: https://amaarora.github.io/2020/06/29/FocalLoss.html
Initialisation.
- Parameters:
alpha (int) – Weight parameter. Default is 1.
gamma (int) – Focusing parameter. Default is 2.
logits (bool) – Controls whether probabilities or raw logits are passed. Default is True.
reduce (bool) – Specifies whether to reduce the loss to a single value. Default is True.
kwargs – Any key word arguments to be passed to the base class
- name = 'FocalLoss'#
- key = 'focal_loss'#
- calculate(y_true, y_pred)[source]#
Method to compute the focal loss.
Note
Based on the implementation at: https://www.kaggle.com/c/tgs
- Parameters:
y_true (list or numpy array) – The ground truth values for the target variable. Can be array-like of arbitrary size.
y_pred (list or numpy array) – The prediction values for the target variable. Must be of identical size as y_true.
- Returns:
The focal loss between y_pred and y_true.
- Return type:
float
- Raises:
ValueError – If the shapes of y_pred and y_true do not match.
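A plain-Python sketch of a focal loss for binary targets with probability inputs (the logits=True path would apply a sigmoid first). It uses the common formulation pt = exp(-BCE); this mirrors typical implementations, not necessarily the aitlas source:

```python
import math

# Focal loss: the BCE of each sample is down-weighted by
# (1 - pt)**gamma, so confident correct predictions contribute little.
def focal_loss(y_true, y_prob, alpha=1, gamma=2, reduce=True):
    losses = []
    for t, p in zip(y_true, y_prob):
        bce = -(t * math.log(p) + (1 - t) * math.log(1 - p))
        pt = math.exp(-bce)  # probability assigned to the correct class
        losses.append(alpha * (1 - pt) ** gamma * bce)
    return sum(losses) / len(losses) if reduce else losses

easy = focal_loss([1], [0.9])  # well classified: heavily down-weighted
hard = focal_loss([1], [0.1])  # misclassified: keeps most of its loss
```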
- class CompositeMetric(metrics=None, weights=None, **kwargs)[source]#
Bases:
BaseMetric
A class for combining multiple metrics.
Initialisation.
- Parameters:
metrics (list) – The metrics to combine.
weights (list) – A weight for each metric; the weights must sum to one.
- Raises:
ValueError – If the length of metrics and weights is not equal or if the sum of weights is not equal to one.
- name = 'CompositeMetric'#
- key = 'composite_metric'#
- calculate(y_true, y_pred)[source]#
Method to calculate the weighted sum of the metric values.
- Parameters:
y_true (array-like) – The ground truth values for the target variable.
y_pred (array-like of identical size as y_true) – The prediction values for the target variable.
- Returns:
The weighted sum of each metric value.
- Return type:
float
- Raises:
ValueError – If the shapes of y_pred and y_true do not match.
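The weighted combination can be sketched over already-computed metric values (hypothetical helper; the validation mirrors the ValueError conditions described above):

```python
# Weighted sum of metric values, with the same length and
# sum-to-one checks described for CompositeMetric.
def composite(metric_values, weights):
    if len(metric_values) != len(weights):
        raise ValueError("metrics and weights must have equal length")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to one")
    return sum(v * w for v, w in zip(metric_values, weights))

# e.g. combine a Dice score of 0.8 and an IoU of 0.6 half-and-half
score = composite([0.8, 0.6], [0.5, 0.5])
```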