Cross-entropy loss is a widely used alternative to squared error. It is used when node activations can be interpreted as the probability that each hypothesis is true, i.e., when the output is a probability distribution. Thus, it is used as the loss function in neural networks that have Softmax activations in the output layer.
Cross-entropy indicates the distance between what the model believes the output distribution should be and what the true distribution really is.
For a single sample, it is defined as

CE = −∑ᵢ yᵢ log(pᵢ),

where yᵢ is the true label and pᵢ is the predicted probability for class i.
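As a minimal numeric illustration of this formula (the class count, label, and probabilities below are made-up example values), the cross-entropy of a single prediction can be computed directly:

import torch

# Hypothetical example: 3 classes, the true class is index 1
y_true = torch.tensor([0.0, 1.0, 0.0])   # one-hot ground-truth label
p_pred = torch.tensor([0.1, 0.7, 0.2])   # predicted probabilities (e.g. a Softmax output)

# CE = -sum_i y_i * log(p_i); only the true class contributes to the sum
ce = -(y_true * torch.log(p_pred)).sum()
print(ce)  # -log(0.7) ≈ 0.3567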
The goal of cross-entropy loss is to measure how well the probability distribution output by Softmax matches the one-hot-encoded ground-truth label of the data. The logarithm ensures that wrong predictions made with high confidence are penalized more heavily. The cross-entropy loss function comes right after the Softmax layer: it takes the Softmax output and the true label as its inputs.
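To make the Softmax-then-cross-entropy pipeline concrete, the sketch below (using made-up logits and a made-up target class) first applies Softmax to raw scores and then takes the negative log of the probability assigned to the true class; the result matches PyTorch's built-in F.cross_entropy, which performs the Softmax and logarithm steps internally on the raw scores:

import torch
import torch.nn.functional as F

# Hypothetical raw scores (logits) for one sample over 4 classes; true class is index 2
logits = torch.tensor([[1.5, 0.3, 2.2, -0.8]])
target = torch.tensor([2])

# Step 1: Softmax turns the raw scores into a probability distribution
probs = F.softmax(logits, dim=1)

# Step 2: cross-entropy is the negative log-probability of the true class
manual_loss = -torch.log(probs[0, target[0]])

# The built-in loss fuses both steps and takes the raw scores directly
builtin_loss = F.cross_entropy(logits, target)

print(manual_loss.item(), builtin_loss.item())  # both print the same value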
When interpreting cross-entropy values, lower is better: a loss of 0 means the predicted distribution matches the ground truth exactly, and the higher the value, the further the model's predictions are from the true labels.
# Import PyTorch and its neural network module
import torch
import torch.nn as nn

# Raw, unnormalized scores (logits) for a batch of 3 samples and 5 classes
input = torch.randn(3, 5, requires_grad=True)
# Ground-truth class indices for each sample, drawn at random from 0..4
target = torch.empty(3, dtype=torch.long).random_(5)

# Cross-Entropy Loss: applies log-Softmax to the logits internally
cross_entropy_loss = nn.CrossEntropyLoss()
output = cross_entropy_loss(input, target)
output.backward()  # compute gradients of the loss with respect to the logits

print('input: ', input)
print('target: ', target)
print('output: ', output)
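When this snippet is run, the printed loss is a single scalar: with the default reduction='mean', nn.CrossEntropyLoss averages the per-sample losses over the batch of 3. The call to output.backward() then fills input.grad with the gradient of that scalar with respect to the raw scores, which is what an optimizer would use to update the weights of a real network.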