Binary Cross-Entropy Loss

Binary Cross-Entropy loss is a special case of Cross-Entropy loss used for binary and multilabel classification (taggers). It is the Cross-Entropy loss for the case where only two classes are involved, and it relies on a Sigmoid activation to turn the model's raw outputs (logits) into probabilities.
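
To make the Sigmoid connection concrete, here is a minimal plain-Python sketch of how a Sigmoid turns per-tag logits into independent probabilities, which is what makes the loss suitable for multilabel setups. The logit values are made up for illustration:

import math

def sigmoid(z):
    # maps a raw model output (logit) to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

logits = [1.5, -0.3, 2.0]  # one logit per tag (illustrative values)
probs = [sigmoid(z) for z in logits]  # each tag gets an independent probability
print(probs)  # ≈ [0.818, 0.426, 0.881]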

Mathematically, it is given as,

$$Binary\;C.E. = -\sum_{i=1}^{2} t_i \log(p_i)$$

Where $t_i$ is the true label and $p_i$ is the predicted probability of the $i^{th}$ class.
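
As a quick sanity check on the formula, here is a minimal plain-Python sketch for a single example with two classes (the numbers are illustrative, not from any dataset):

import math

t = [0.0, 1.0]  # one-hot true label: the example belongs to the second class
p = [0.3, 0.7]  # predicted probabilities for the two classes

# BCE = -sum_i t_i * log(p_i)
bce = -sum(t_i * math.log(p_i) for t_i, p_i in zip(t, p))
print(bce)  # -log(0.7) ≈ 0.357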

Code implementation

PyTorch


# importing the libraries
import torch
import torch.nn as nn

# Binary Cross-Entropy Loss on logits

target = torch.ones([10, 64], dtype=torch.float32)  # 64 classes, batch size = 10
output = torch.full([10, 64], 1.5)  # a prediction (logit)

pos_weight = torch.ones([64])  # all class weights equal to 1
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
criterion(output, target)  # -log(sigmoid(1.5)) ≈ 0.2014
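
BCEWithLogitsLoss combines the Sigmoid and the BCE computation in one numerically stable operation. As a sketch (reusing output and target from above), the same value can be reproduced in two explicit steps with nn.BCELoss, which expects probabilities rather than raw logits:

probs = torch.sigmoid(output)  # apply the Sigmoid explicitly
bce = nn.BCELoss()
bce(probs, target)  # same value: -log(sigmoid(1.5)) ≈ 0.2014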

TensorFlow


# importing the library
import tensorflow as tf

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]

# Using 'auto'/'sum_over_batch_size' reduction type.
bce = tf.keras.losses.BinaryCrossentropy()
bce(y_true, y_pred).numpy()  # ≈ 0.815


# Calling with 'sample_weight'; a weight of 0 removes the second sample's contribution.
bce(y_true, y_pred, sample_weight=[1, 0]).numpy()  # ≈ 0.458
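
If the model outputs raw logits rather than probabilities, Keras can apply the Sigmoid internally via from_logits=True. A short sketch reusing y_true from above (the logit values are made up for illustration):

y_logits = [[-0.4, 0.4], [0.3, -0.3]]  # raw model outputs (illustrative values)
bce_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)
bce_logits(y_true, y_logits).numpy()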


