Accuracy

Accuracy is the percentage of predictions that are correct overall. It is derived from the confusion matrix.

When used for multi-label classifiers (taggers), accuracy is also referred to as the Hamming score.
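One common convention, sketched below, computes this as 1 minus the Hamming loss, i.e., the fraction of individual label assignments that are correct. Note that some sources define the Hamming score differently (e.g., as an averaged Jaccard index over label sets), and the example data here is made up for illustration:

# Hamming score as 1 - Hamming loss: the fraction of individual
# label assignments (cells of the indicator matrix) that are correct
import numpy as np
from sklearn.metrics import hamming_loss

# Each row is a sample, each column a label (1 = present, 0 = absent)
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1]])

print(1 - hamming_loss(y_true, y_pred))  # 4 of 6 assignments correct -> ~0.667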

Interpretation / calculation

Accuracy is computed as the sum of true positives (TP) and true negatives (TN) divided by the total number of predictions:

$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$

For a single-label classifier, these values are aggregated over the confusion matrices of all classes.
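As a minimal sketch, the formula can be evaluated directly from the four confusion-matrix counts (the counts below are made up for illustration):

# Accuracy from confusion-matrix counts (illustrative numbers)
tp, tn, fp, fn = 40, 50, 4, 6

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # (40 + 50) / 100 = 0.9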

For instance segmentation and semantic segmentation models, as well as object detectors, the confusion matrix is calculated by first checking whether the predicted class matches the ground truth, and then whether the IoU is above a certain threshold; 0.5 is often used.
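To illustrate how such a match is decided, here is a minimal sketch for a single predicted box; the (x1, y1, x2, y2) box format, the helper names, and the 0.5 threshold are assumptions for illustration:

# A detection counts as a true positive only if the class matches
# and the IoU with the ground-truth box exceeds the threshold
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2)
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred_class, gt_class, pred_box, gt_box, threshold=0.5):
    return pred_class == gt_class and iou(pred_box, gt_box) > threshold

print(is_true_positive("car", "car", (0, 0, 10, 10), (1, 1, 11, 11)))  # True, IoU ~0.68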

Whereas accuracy is very intuitive, it has one drawback: it doesn't tell you what kind of errors your model makes. At a 1% misclassification rate (99% accuracy), the errors could be caused by either false positives (FP) or false negatives (FN). This information matters when you evaluate a model for a specific use case, though. Take COVID tests as an example: you would rather have FPs than FNs.
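As a small illustration with made-up labels, the two prediction sets below have identical accuracy, yet one only makes FP errors and the other only FN errors:

from sklearn.metrics import accuracy_score, confusion_matrix

y_true   = [0, 0, 0, 1, 1, 1, 1, 1]
preds_fp = [0, 0, 1, 1, 1, 1, 1, 1]  # one false positive, no false negatives
preds_fn = [0, 0, 0, 0, 1, 1, 1, 1]  # one false negative, no false positives

print(accuracy_score(y_true, preds_fp))  # 0.875
print(accuracy_score(y_true, preds_fn))  # 0.875 -- same accuracy, different errors
print(confusion_matrix(y_true, preds_fp))  # [[2 1] [0 5]]
print(confusion_matrix(y_true, preds_fn))  # [[3 0] [1 4]]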

More informative metrics in such cases are precision and recall, which separate the two error types.

Code implementation

PyTorch

!pip install torchmetrics

import torch
from torchmetrics import Accuracy

target = torch.tensor([0, 1, 2, 3])
preds = torch.tensor([0, 2, 1, 3])

# Recent versions of torchmetrics require the task to be specified
accuracy = Accuracy(task="multiclass", num_classes=4)
print(accuracy(preds, target))  # tensor(0.5000) -- 2 of 4 predictions correct

Sklearn

from sklearn.metrics import accuracy_score

y_pred = [0, 2, 1, 3]
y_true = [0, 1, 2, 3]

# Returns the fraction of correct predictions (0.5 here);
# pass normalize=False to get the raw count of correct predictions instead
print(accuracy_score(y_true, y_pred))

TensorFlow

# Import the library
import tensorflow as tf

# update_state takes (y_true, y_pred); 3 of the 4 values match
m = tf.keras.metrics.Accuracy()
m.update_state([1, 2, 3, 4], [0, 2, 3, 4])

print('Final result: ', m.result().numpy())  # 0.75
