If you have ever worked on an Object Detection, Instance Segmentation, or Semantic Segmentation task, you might have heard of the popular Intersection over Union (IoU) Machine Learning metric. On this page, we will cover what IoU is, how to calculate it, how to interpret its values, and how to implement it in Python.
Let’s jump in.
In general, Deep Learning algorithms combine Classification and Regression tasks in various ways. For example, in the Object Detection task, you need to predict a bounding box (Regression) and a class inside the bounding box (Classification). In real life, it might be challenging to evaluate both parts at once (mean Average Precision can help you with that), so there are separate metrics for the Regression and Classification parts of the task. This is where Intersection over Union comes into the mix.
To define the term: in Machine Learning, IoU stands for Intersection over Union - a metric used to evaluate Deep Learning algorithms by estimating how well a predicted mask or bounding box matches the ground truth data. For your information, IoU is also referred to as the Jaccard index or Jaccard similarity coefficient in some academic papers.
So, to evaluate a model using the IoU metric, you need to have:
- the ground truth annotations (masks or bounding boxes);
- the corresponding predictions made by your model.
Fortunately, Intersection over Union is a highly intuitive metric, so you should have no trouble understanding it. IoU is calculated by dividing the overlap between the predicted and the ground truth annotation by the union of the two.
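Written as a formula, with A being the predicted region and B the ground truth region:

IoU(A, B) = \frac{|A \cap B|}{|A \cup B|}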
It is not a problem if you are unfamiliar with the mathematical notation because the Intersection over Union formula can be easily visualized.
You might be wondering why we calculate the overlap/union areas instead of comparing the (x, y) coordinate pairs straightaway. In a real-life scenario, it is highly unlikely that the predicted coordinates will exactly match the ground truth ones for various reasons (in short, the model's parameters are never perfectly tuned). Still, we need a metric to evaluate the bounding boxes (and masks), so it seems reasonable to reward the model for predictions that heavily overlap with the ground truth.
So, the Intersection over Union algorithm is as follows (see the sketch right after this list):
1. Calculate the area of overlap (intersection) between the predicted annotation and the ground truth one;
2. Calculate the area of their union;
3. Divide the intersection area by the union area.
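For axis-aligned bounding boxes, these three steps translate into just a few lines of code. The sketch below is our illustration: the helper name bbox_iou and the (x1, y1, x2, y2) corner format are our assumptions, not part of any framework API.

def bbox_iou(box_a, box_b):
    # Boxes are assumed to be (x1, y1, x2, y2) corners with x1 < x2 and y1 < y2
    # Step 1: area of intersection (zero if the boxes do not overlap)
    inter_w = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    inter_h = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    intersection = inter_w * inter_h
    # Step 2: area of union = area of A + area of B - intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    # Step 3: divide intersection by union
    return intersection / union if union > 0 else 0.0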
In the IoU case, the metric value interpretation is straightforward: the bigger the overlap, the higher the score, and the better the result.
The best possible value is 1. Still, from our experience, it is highly unlikely you will be able to reach such a score.
Our suggestion is to consider IoU > 0.95 an excellent score, IoU > 0.7 a good one, and anything below that a poor one. Still, you can set your own thresholds, as your logic and task might differ significantly from ours.
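If you want to bake such thresholds into your evaluation code, a tiny helper is enough. The function name interpret_iou and the cut-offs below are just our suggestion from above, not a standard:

def interpret_iou(iou: float) -> str:
    # Cut-offs follow the suggestion above - tune them for your own task
    if iou > 0.95:
        return "excellent"
    if iou > 0.7:
        return "good"
    return "poor"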
Let’s say that we have two bounding boxes with the coordinates as presented in the image below.
In such a case, computing the IoU boils down to a couple of arithmetic operations, as shown in the example below.
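The exact coordinates from the original illustration are not reproduced in this text version, so the boxes below are purely hypothetical, but the mechanics are identical. Reusing the bbox_iou sketch from above:

box_a = (1, 1, 5, 5)  # hypothetical predicted box: 4 * 4 = 16 square units
box_b = (3, 3, 7, 7)  # hypothetical ground truth box: 4 * 4 = 16 square units

# Intersection: the region from (3, 3) to (5, 5) => 2 * 2 = 4
# Union: 16 + 16 - 4 = 28
print(bbox_iou(box_a, box_b))  # 4 / 28 ≈ 0.14 - a poor score by the thresholds above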
Intersection over Union is widely used in the industry (especially for Deep Learning tasks), so all the major Deep Learning frameworks have their own implementations of this metric. Moreover, IoU is relatively simple in concept, so you can code it manually without any issues. For this page, we prepared two code blocks featuring Intersection over Union in Python. In detail, you can check out:
- a NumPy implementation of IoU;
- a PyTorch implementation of IoU.
Hello, thank you for using the code provided by Hasty. Please note that some code blocks might not be 100% complete and ready to be run as is. This is done intentionally as we focus on implementing only the most challenging parts that might be tough to pick up from scratch. View our code block as a LEGO block - you can’t use it as a standalone solution, but you can take it and add to your system to complement it. If you have questions about using the tool, please get in touch with us to get direct help from the Hasty team.
import numpy as np

SMOOTH = 1e-6

def iou_numpy(outputs: np.ndarray, labels: np.ndarray):
    # outputs is expected to be BATCH x 1 x H x W, labels is BATCH x H x W;
    # both should be boolean (or 0/1 integer) masks so that & and | work element-wise
    outputs = outputs.squeeze(1)  # BATCH x 1 x H x W => BATCH x H x W

    intersection = (outputs & labels).sum((1, 2))  # will be zero if Truth=0 or Prediction=0
    union = (outputs | labels).sum((1, 2))  # will be zero only if both are 0

    iou = (intersection + SMOOTH) / (union + SMOOTH)  # we smooth the division to avoid 0/0

    thresholded = np.ceil(np.clip(20 * (iou - 0.5), 0, 10)) / 10  # equivalent to comparing with thresholds 0.5, 0.55, ..., 0.95

    return thresholded  # or thresholded.mean() for the average across the batch
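As a quick sanity check of iou_numpy, here is a hedged usage sketch; the random boolean masks below are just for illustration:

preds = np.random.rand(4, 1, 32, 32) > 0.5  # 4 predicted masks, BATCH x 1 x H x W, boolean
truths = np.random.rand(4, 32, 32) > 0.5    # 4 ground truth masks, BATCH x H x W, boolean

print(iou_numpy(preds, truths))         # per-image thresholded IoU
print(iou_numpy(preds, truths).mean())  # average across the batch

The PyTorch version below follows the same logic.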
import torch

SMOOTH = 1e-6

def iou_pytorch(outputs: torch.Tensor, labels: torch.Tensor):
    # outputs and labels are expected to be boolean (or integer) masks.
    # You can comment out the next line if you are passing tensors of equal shape,
    # but if you are passing output from UNet or something similar, it will most
    # probably have the BATCH x 1 x H x W shape
    outputs = outputs.squeeze(1)  # BATCH x 1 x H x W => BATCH x H x W

    intersection = (outputs & labels).float().sum((1, 2))  # will be zero if Truth=0 or Prediction=0
    union = (outputs | labels).float().sum((1, 2))  # will be zero if both are 0

    iou = (intersection + SMOOTH) / (union + SMOOTH)  # we smooth the division to avoid 0/0

    thresholded = torch.clamp(20 * (iou - 0.5), 0, 10).ceil() / 10  # this is equivalent to comparing with thresholds

    return thresholded  # or thresholded.mean() if you are interested in the average across the batch
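One caveat: the bitwise & and | operators require boolean (or integer) tensors, so raw network outputs have to be binarized first. A minimal sketch, assuming a sigmoid output and a 0.5 cut-off:

logits = torch.randn(4, 1, 32, 32)                # raw UNet-style output, BATCH x 1 x H x W
preds = torch.sigmoid(logits) > 0.5               # binarize predictions into a boolean mask
truths = torch.randint(0, 2, (4, 32, 32)).bool()  # ground truth masks, BATCH x H x W

print(iou_pytorch(preds, truths).mean())          # average thresholded IoU across the batch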
Also, there is the Generalized Intersection over Union (GIoU) metric built on top of the original IoU. It is less popular, as it is mostly used to evaluate underperforming models: when the predicted and ground truth boxes do not overlap at all, plain IoU stays at 0 no matter how far apart they are, whereas the value of Generalized IoU still increases as the predicted mask or bounding box moves towards the ground truth, giving Data Scientists the feeling that their experiments are successful and moving in the right direction. Please refer to the original post to learn more about GIoU.
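For reference, the GIoU formula extends IoU with a penalty term based on C - the smallest convex shape (for axis-aligned boxes, the smallest enclosing box) containing both A and B:

GIoU(A, B) = IoU(A, B) - \frac{|C \setminus (A \cup B)|}{|C|}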