Negative gradients when calculating GradCAM heatmap

I have a segmentation network trained for 2 classes, and it produces accurate results. When I use Grad-CAM to generate heatmaps, the last convolution layer gives good results for both classes, but the second-to-last convolution layer produces an empty heatmap for one of the classes (the other class's heatmap works fine).

**Last 5 layers**

- convolution_layer(filters: 8, kernel: 3×3)
- convolution_transpose_layer(filters: 2, kernel: 2×2)
- convolution_layer(filters: 2, kernel: 3×3)
- convolution_layer(filters: 10, kernel: 1×1)
- activation_layer(softmax)

The heatmap is empty because all the pooled gradients are negative (they are the channel-wise mean of gradients that are all negative with respect to the conv layer's output). The weighted sum pooled_grads * convolution_output is therefore negative everywhere, and applying ReLU to it gives all zeros.
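A minimal NumPy sketch of that failure mode (all shapes and values here are toy stand-ins, not the actual model's): with a non-negative conv activation map and all-negative pooled gradients, the weighted sum is non-positive everywhere, so the final ReLU zeroes the whole heatmap.

```python
import numpy as np

rng = np.random.default_rng(0)
conv_output = rng.random((4, 4, 8))   # H x W x C activations, non-negative (post-ReLU)
grads = -rng.random((4, 4, 8))        # all-negative gradients wrt the conv layer

# Grad-CAM: global-average-pool the gradients to get one weight per channel,
# then take the weighted sum of channels and apply ReLU.
pooled_grads = grads.mean(axis=(0, 1))               # all negative
heatmap = np.maximum(conv_output @ pooled_grads, 0)  # non-negative * negative <= 0

print(heatmap.max())  # -> 0.0: ReLU wipes out the entire map
```

This is exactly why the map comes out empty: nothing in the Grad-CAM recipe itself is broken; the ReLU is doing what it is defined to do on an everywhere-negative input.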

What does it mean for GradCAM to be all negative?

Why would every channel in that convolution layer make a negative contribution to the true output class? I am following this paper for generating heatmaps for segmentation models.
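One way to sanity-check the "every channel is negative" claim before the ReLU destroys the information is a small diagnostic over the pooled gradients (a hypothetical helper, assuming gradients come as an H × W × C array):

```python
import numpy as np

def pooled_gradient_signs(grads):
    """Count how many channels have positive vs. negative pooled gradients.

    grads: H x W x C array of d(class score)/d(conv activation).
    """
    pooled = grads.mean(axis=(0, 1))  # one Grad-CAM weight per channel
    return {
        "n_positive": int((pooled > 0).sum()),
        "n_negative": int((pooled < 0).sum()),
    }

# Example with deliberately all-negative gradients:
rng = np.random.default_rng(1)
print(pooled_gradient_signs(-rng.random((4, 4, 8))))
```

If this reports even a few positive channels for the problem class, the heatmap should not be identically zero, which would point to a bug in the gradient computation rather than the model.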

submitted by /u/Ash_real
