Gradients negative through ReLU

I’m trying to implement gradient activation mapping (Grad-CAM) for segmentation problems. As a basic segmentation model, I use the U-Net (Unet_github).

To extract the activations, I hooked one of the down layers ((Conv->BN->ReLU)*2). To get the gradients of the summed outputs of the network w.r.t. that down layer, I called:

torch.autograd.grad(outputs=sum(out[:, target_class, spatial_pos]), inputs=hooked_activations)[0].cpu().data.numpy()
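
For context, here is a minimal, self-contained version of what I’m doing (a toy conv block stands in for the full U-Net, and target_class / spatial_pos are just placeholder indices):

import torch
import torch.nn as nn

# Toy stand-in for one U-Net "down" block plus a 4-class segmentation head
block = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
head = nn.Conv2d(8, 4, kernel_size=1)

# Forward hook to capture the block's output activations
hooked_activations = []
block.register_forward_hook(lambda module, inp, out: hooked_activations.append(out))

x = torch.randn(2, 3, 16, 16)
out = head(block(x))                                  # (N, classes, H, W)
target_class, spatial_pos = 1, 8                      # placeholder indices
score = out[:, target_class, spatial_pos].sum()
grads = torch.autograd.grad(outputs=score, inputs=hooked_activations[0])[0]
print(grads.shape, grads.min().item())                # min() is typically negative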

I’m wondering why the gradients turned out to have negative values. I checked the hooked activations and they were >= 0. As I understood it, the gradients should be positive as well. I changed the ReLUs within the “Down” blocks of the U-Net to inplace=False, as I read that in-place operations are dismissed by autograd.grad.

I’m happy for any help!

Backpropagation computes grad_in = grad_out @ grad_relu by the chain rule. Since ReLU acts pointwise and its derivative grad_relu is 0 for x < 0 and 1 for x > 0, grad_in is just a “masked” version of grad_out: wherever the input was positive, the upstream gradient passes through unchanged, sign and all.
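
A tiny illustration, with a made-up upstream gradient:

import torch

# Positions where the pre-activation was positive pass the (possibly negative)
# upstream gradient straight through; only the x < 0 position is zeroed.
x = torch.tensor([-1.0, 0.5, 2.0], requires_grad=True)
y = torch.relu(x)
upstream = torch.tensor([3.0, -4.0, -5.0])   # pretend gradient from later layers
y.backward(upstream)
print(x.grad)                                # tensor([ 0., -4., -5.])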

If you wanted to be more selective about which sources to believe on the internet, this might be a bit of information to consider dubious. There is some delicacy to autograd and in-place operations, and I could spend hours talking about that and about why avoiding in-place is a good default policy, but autograd.grad does not dismiss operations.
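
As a quick sanity check (independent of the U-Net code), an in-place ReLU on an intermediate tensor still gives the expected gradient from autograd.grad:

import torch

x = torch.randn(5, requires_grad=True)
y = (x * 2).relu_()                          # in-place ReLU on an intermediate tensor
grad, = torch.autograd.grad(y.sum(), x)
print(grad)                                  # 2.0 where x > 0, 0.0 elsewhere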

Best regards

Thomas

Hi Thomas,

thanks for the clarification, it was very helpful!