Understanding why your model makes a specific prediction can be highly valuable. The Hasty Explainability feature helps you visualize the results of an experiment using saliency maps.
Explainability might help you get insights into questions like:
To understand how the feature works, let’s first go over the basic concept of explainability.
Neural networks and deep learning brought a massive advancement to the Artificial Intelligence domain. Model predictions became more precise, and ML tasks more complex. However, it became harder and harder for humans to understand the reasoning behind a model’s decisions. This phenomenon is known as the black-box problem: only the input and the output of the model are available, while the rest of the algorithm remains obscure.
The black-box paradigm might be problematic for several reasons:
That is where the concept of Explainability steps in. It stands for building transparent AI that is comprehensible to humans and open to thorough evaluation.
In computer vision, saliency maps are one of the methods used to interpret what neural networks see.
Consider this image. For you, it is obvious that there is a dog in the picture. However, neural networks need to be trained on hundreds or thousands of images before they can recognize a dog, and even then, they might make wrong predictions in edge cases like this one:
Even when the prediction is made correctly, you might wonder what exactly makes the model think it is a dog: is it the nose, the ears, or the proportions of the head?
Saliency maps help researchers understand the importance of different features in the model’s “eyes” by visualizing images as heatmaps or grayscale maps, depending on the method. The hottest (most red) regions of a heatmap, or the brightest pixels of a grayscale map, point out the areas of the image that had the strongest influence on the model’s prediction.
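To make the idea more concrete, here is a minimal sketch of how a basic gradient-based saliency map can be computed with PyTorch and torchvision. It illustrates the general technique only, not the method Hasty uses internally; the model choice, image path, and preprocessing values are assumptions.

```python
# A minimal sketch of a vanilla gradient saliency map with PyTorch/torchvision.
# The model, image path, and preprocessing values are illustrative assumptions,
# not the exact method used by Hasty.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Hypothetical input image; any RGB picture works.
image = Image.open("dog.jpg").convert("RGB")

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

x = preprocess(image).unsqueeze(0)   # shape: (1, 3, 224, 224)
x.requires_grad_(True)               # we need gradients w.r.t. the input pixels

logits = model(x)
top_class = logits.argmax(dim=1).item()

# Backpropagate the top-class score down to the input pixels.
logits[0, top_class].backward()

# Each pixel's saliency is the magnitude of its gradient,
# taking the maximum over the three color channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)   # shape: (224, 224)

# Normalize to [0, 1] so it can be shown as a grayscale map or a heatmap overlay.
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```

Bright (or hot) regions of the resulting map correspond to pixels whose changes would affect the class score the most, which is exactly the kind of visualization described above.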
In Hasty, we use two models to create saliency maps:
With saliency maps, you can analyze the image regions or features that stood out across the whole dataset. These insights might help you understand whether your model picks up on the key features and retrain it if some biases are present.
For example, in the case above (the classification of doctors and nurses), the biased model made predictions based mainly on a person’s face and hairstyle. This led to female doctors being misclassified as nurses. In contrast, the unbiased model considered the white coat and the stethoscope as more important features and, therefore, made more correct predictions.
In the case below, the model was noticeably worse at recognizing cats when they were in a cage.
Once we have this insight, we can add more images of cats in cages to our training set. Additionally, we can augment the images to make training more effective, as sketched below.
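As one example of what such augmentation could look like, here is a simple torchvision transform pipeline; the specific transforms and parameter values are illustrative assumptions rather than a prescribed recipe.

```python
# A minimal augmentation sketch using torchvision transforms.
# The specific transforms and parameters are illustrative assumptions.
import torchvision.transforms as T

train_transforms = T.Compose([
    T.RandomResizedCrop(224, scale=(0.7, 1.0)),    # vary how much of the cat/cage is visible
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.2, contrast=0.2),   # vary lighting conditions
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# The pipeline is typically passed to the dataset, so every epoch sees
# slightly different versions of the same training images, e.g.:
# dataset = torchvision.datasets.ImageFolder("data/train", transform=train_transforms)
```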
1. Open Model Playground;
2. Select the split and a completed experiment;
3. Next to the current training results, you will see the Model insights tab;
4. Within the Model insights tab, create an Explainability run. You should:
5. To access saliency maps produced by your run, please navigate to the Insights tab of the Explainability widget.
When the run is completed, you will see the results listed as images, and you can toggle the saliency map on and off.
Explainability is an essential concept in AI that helps you understand the decisions made by models and build confidence in their predictions.
You might benefit from the Explainability feature in Hasty if you want to:
Thanks for reading, and happy training!