Gradient-based Model Explainability Technique, GradCAM, for Object Detection Models

What are you looking at?

Image analysis is one of the most sought-after tools for achieving automation across sectors. Tasks such as image classification, captioning, detection, and segmentation are crucial to demanding technologies like self-driving cars, which makes image analysis indispensable in today's high-tech world.

Image analysis software contains algorithms that execute machine learning (ML) and deep learning (DL) models. Unlike classical ML models, a deep learning model is very hard to understand: it delivers the best results but cannot explain its decisions. Such complex models with little to no interpretability are called black-box models.


Hence the need to explain these models and understand the reasons behind a particular output. A list of reasons why explainability is required for AI can be found in this article. Some XAI techniques require knowledge of the model architecture (white-box techniques) while others do not (black-box techniques); for more on these two types of explainability techniques, refer to this article.

In this article, let us learn how the gradient-based GradCAM technique can be applied to an object detection model: YOLOv3 trained on the PASCAL-VOC classes.

GradCAM (Gradient-weighted Class Activation Mapping)

GradCAM is a local explainability technique that generates visual heatmaps for an input image. Based on the gradients flowing into the last convolution layer of the model, the heatmaps highlight the pixels responsible for a particular prediction. Computing these gradients requires access to the model architecture, which makes GradCAM a white-box explainability technique.
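For reference, the original GradCAM paper defines the heatmap as follows: the class-specific weight of each feature map is the global average of the gradients, and the map is a ReLU of the weighted combination of the feature maps:

\alpha_k^c = \frac{1}{Z} \sum_i \sum_j \frac{\partial y^c}{\partial A_{ij}^k}, \qquad
L_{GradCAM}^c = \mathrm{ReLU}\Big(\sum_k \alpha_k^c A^k\Big)

where A^k is the k-th feature map of the last convolution layer, y^c is the score for class c, and Z is the number of spatial locations in the feature map.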

Let's dive into the CODE and understand it in detail as we go.

1) Import necessary packages to load and process input data.

# import necessary packages

import cv2
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# tensorflow packages
import tensorflow as tf
from tensorflow.keras.models import load_model, Model
import tensorflow.keras.backend as K
from tensorflow.keras.preprocessing.image import img_to_array, load_img

2) For the YOLOv3 model, let's follow this GitHub repository. The YOLOv3+MobileNetV2 architecture is taken and trained on the PASCAL-VOC classes. YOLOv3 is a fully convolutional neural network, and the input image size is 416x416. More details on the YOLO model architectures and training can be found here.

In our context, the key thing to know about the YOLOv3 model is that it makes detections at 3 different scales (i.e., 13x13, 26x26 & 52x52 grids). As discussed earlier, implementing GradCAM (a white-box technique) requires the gradients of the last convolution layer at each scale.

Let's look at model.summary() to find the layers required for the GradCAM implementation.

model = load_model("model_path.h5")

model.summary()

conv_pw_pred_1_3_leaky_relu (Le (None, 13, 13, 1024) 0           conv_pw_pred_1_3_bn[0][0]        

conv_pw_pred_2_3_leaky_relu (Le (None, 26, 26, 512)  0           conv_pw_pred_2_3_bn[0][0]        

conv_pw_pred_3_3_leaky_relu (Le (None, 52, 52, 256)  0           conv_pw_pred_3_3_bn[0][0]        

predict_conv_1 (Conv2D)         (None, 13, 13, 75)   76875       conv_pw_pred_1_3_leaky_relu[0][0]

predict_conv_2 (Conv2D)         (None, 26, 26, 75)   38475       conv_pw_pred_2_3_leaky_relu[0][0]

predict_conv_3 (Conv2D)         (None, 52, 52, 75)   19275       conv_pw_pred_3_3_leaky_relu[0][0]

Only the last layers are attached above, from which the 3 scales are clearly evident (i.e., layers predict_conv_1, predict_conv_2, predict_conv_3). The GradCAM computation below uses the leaky-ReLU convolution layers that feed these three prediction heads.
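Before implementing GradCAM, a few variables used in the code below need to be set up. A minimal sketch is given here: the image path is hypothetical, eps is a small constant assumed for numerical stability, and the layer names are the three leaky-ReLU layers (one per detection scale) from the summary above.

# input image to be explained (path is hypothetical)
image = Image.open("test_image.jpg")

# YOLOv3 input size and a small constant to avoid division by zero
model_image_size = (416, 416)
eps = 1e-8

# the three convolution (leaky-ReLU) layers, one per detection scale
conv_layer = ["conv_pw_pred_1_3_leaky_relu",
              "conv_pw_pred_2_3_leaky_relu",
              "conv_pw_pred_3_3_leaky_relu"]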

3) For the GradCAM algorithm, we build a GradCAM model for each scale, mapping the input to the activations of the chosen convolution layer and to the model's predictions. Next, a gradient tape is created to record the operations and compute the gradients. More about tf.GradientTape() here.

for j, layer in enumerate(conv_layer):
    # model that maps the input image to the activations of the chosen
    # convolution layer and to the model's predictions
    gradModel = Model(inputs=model.inputs,
                      outputs=[model.get_layer(layer).output, *model.outputs])
    with tf.GradientTape() as tape:
        # resize the input image to the network's 416x416 input size
        res_t = cv2.resize(np.asarray(image), model_image_size, interpolation=cv2.INTER_AREA)
        res_t = tf.expand_dims(res_t, axis=0)
        inputs = tf.cast(res_t, tf.float32)
        tape.watch(inputs)
        convOutputs, *predictions = gradModel(inputs)

    # gradients of the j-th scale's predictions w.r.t. the conv layer outputs
    grads = tape.gradient(predictions[j], convOutputs)

    # guided gradients: keep only positive activations and positive gradients
    castConvOutputs = tf.cast(convOutputs > 0, "float32")
    castGrads = tf.cast(grads > 0, "float32")
    guidedGrads = castConvOutputs * castGrads * grads

    # drop the batch dimension
    convOutputs = convOutputs[0]
    guidedGrads = guidedGrads[0]

After the gradients are calculated, the following operations create a heatmap that is resized to fit onto the input image.

    # compute the average of the gradient values
    weights = tf.reduce_mean(guidedGrads, axis=(0, 1))
    cam = tf.reduce_sum(tf.multiply(weights, convOutputs), axis=-1)

    # resize the output heatmap to match the input image dimensions
    (w, h) = (np.array(image).shape[1], np.array(image).shape[0])
    heatmap = cv2.resize(cam.numpy(), (w, h))
    # normalize the heatmap
    # scale to the range [0, 255], and
    # then convert to an unsigned 8-bit integer
    numerator = heatmap - np.min(heatmap)
    denominator = (heatmap.max() - heatmap.min()) + eps
    heatmap = numerator / denominator
    heatmap = (heatmap * 255).astype("uint8")
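To produce the per-scale visualizations discussed below, the heatmap can then be blended onto the original image. The following is a minimal sketch using OpenCV, still inside the loop above; the colormap, blending weights, and output file names are assumptions, not part of the original code.

    # colorize the heatmap and overlay it on the original image
    colored = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
    original = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)
    overlay = cv2.addWeighted(original, 0.6, colored, 0.4, 0)

    # save one overlay per detection scale (13x13, 26x26, 52x52)
    cv2.imwrite("gradcam_scale_" + str(j) + ".png", overlay)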

Now let's look at how the visualizations appear in 3 scales.

As evident from the figure above, as the heatmap scale increases, the highlighted features become more specific: the 13x13 scale highlights the overall object position (airplane, bird), while the 52x52 scale concentrates on key differentiating parts (the exhaust nozzle of the airplane, the tail & legs of the bird). This explains the model's behavior w.r.t. a specific class prediction.

Summary

Thus, GradCAM heatmaps highlight the features that are key to the model's decision-making. These explanations promote model interpretability and give us key insights for improving image analysis software.

References

  1. https://pyimagesearch.com/2020/03/09/grad-cam-visualize-class-activation-maps-with-keras-tensorflow-and-deep-learning/#download-the-code
  2. https://towardsdatascience.com/grad-cam-camera-for-your-models-decision-1ef69aae8fe7

Do Checkout

One such product that can do end-to-end testing, involving Bias, Explainability, Adversarial Attacks, Performance Testing, and Data Generation, is AIEnsured.