Getting this warning: Output 0 of BackwardHookFunctionBackward is a view and is being modified inplace

I have checked the post below and tried to fix the warning, but I am still getting it:
Custom Autograd Function Backward pass not Called

/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:1204: UserWarning: Output 0 of BackwardHookFunctionBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is deprecated and will be forbidden starting version 1.6. You can remove this warning by cloning the output of the custom Function. (Triggered internally at /pytorch/torch/csrc/autograd/variable.cpp:547.) result = torch.relu_(input)

Here is my code:

import torch
import torch.nn.functional as F

class _Grad():
  def __init__(self, model, target_layer=None, input_size=[3, 224, 224]):

    if not isinstance(model, torch.nn.Module):
      raise ValueError("Provide a valid model")
    self.model = model
    self.model_dic = dict(model.named_modules())

    if target_layer is None or target_layer not in self.model_dic.keys():
      raise ValueError("Provide a valid layer")
    self.target_layer = self.model_dic[target_layer]

    self.activations = None
    self.grads = None
    self.hooks = []
    # register_full_backward_hook replaces the deprecated register_backward_hook in torch >= 1.8
    # (note: this plain string comparison misorders versions such as '1.10' vs '1.8')
    back_hook = 'register_full_backward_hook' if torch.__version__ >= '1.8.0' else 'register_backward_hook'
    self.hooks.append(self.target_layer.register_forward_hook(self._extract_activations))
    self.hooks.append(getattr(self.target_layer, back_hook)(self._extract_grads))

  def _extract_activations(self, module, input, output):
    # forward hook: cache the target layer's activations
    self.activations = output.detach()

  def _extract_grads(self, module, input, output):
    # backward hook: cache the gradient flowing out of the target layer
    self.grads = output[0].detach()

  def _backpropagate(self, class_indx, scores):
    if self.activations is None:
      raise TypeError("Input needs to be passed before Backpropagation")
    # sum the target-class scores over the batch and backpropagate
    loss = scores[:, class_indx].sum()
    self.model.zero_grad()
    loss.backward(retain_graph=True)

  def _get_weights(self, class_indx, scores):
    self._backpropagate(class_indx, scores)
    # Grad-CAM weights: global-average-pool the gradients over the spatial dimensions
    b, c, h, w = self.grads.size()
    weights = self.grads.view(b, c, -1).mean(2)
    weights = weights.view(b, c, 1, 1)
    return weights.clone()

  def get_cam_map(self, class_indx, scores, normalized):
    weights = self._get_weights(class_indx, scores)
    # weighted sum of the activation maps over the channel dimension, then ReLU
    cams = torch.nansum((weights * self.activations).squeeze(0), 0)
    cams = F.relu(cams)
    if normalized:
      # min-max normalization to [0, 1]
      cam_map_min, cam_map_max = cams.min(), cams.max()
      cams = (cams - cam_map_min).div(cam_map_max - cam_map_min)
    return cams

  def __call__(self, class_indx, scores, normalized=True):
    return self.get_cam_map(class_indx, scores, normalized)
    
from torchvision.io.image import read_image
from torchvision.models import densenet121
from torchvision.transforms.functional import normalize, resize
model = densenet121(pretrained=True).eval()
img = read_image("image.png")
input_tensor = normalize(resize(img, (224, 224)) / 255., [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
out = model(input_tensor.unsqueeze(0))

@ptrblck, I don’t want to tag you personally, but I am not able to resolve this. Can you please help?

Can you provide read_image? I am trying to reproduce this. Does it return a NumPy array or a torch.Tensor? What shape?

There is no constraint on the image size while reading. I am using torchvision for reading, so the output will be a tensor:

from torchvision.io.image import read_image 
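(For reference, read_image returns a uint8 tensor of shape [C, H, W], which is why the snippet above divides by 255 before normalizing.)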

As you can see here, I was not able to reproduce the exact error you are describing.

How exactly are you using _Grad?

Also, what version of torch are you using?

Also, it is not clear what you are trying to do; the more details, the better.

I missed the line causing the bug in my first snippet. Check it: Link

from torchvision.io.image import read_image
from torchvision.models import densenet121
from torchvision.transforms.functional import normalize, resize

model = densenet121(pretrained=True).eval()
img = read_image("image.png")
input_tensor = normalize(resize(img, (224, 224)) / 255., [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
extract = _Grad(model=model, target_layer="features")  # <- the line that was missing above
out = model(input_tensor.unsqueeze(0))

I am implementing Grad-CAM (Link) for visual explanations of classification models. Given the target layer, I want to extract the activations during the forward pass and the gradients during the backward pass.
torch version: 1.8.1+cu101
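For completeness, once the hook issue is resolved, the extractor would be invoked along these lines; this is a minimal sketch, and the class index is an assumption (e.g. the top-1 prediction):

# assumes `extract` and `out` from the snippet above
class_indx = out.argmax(dim=1).item()  # e.g. explain the predicted class
cam = extract(class_indx, out)         # calls _Grad.__call__ -> get_cam_map
print(cam.shape)                       # spatial CAM over the "features" layer, e.g. torch.Size([7, 7])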

The problem is with the in-place activation functions in your model. You need to disable them.
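For example, a minimal sketch that flips the inplace flag on every ReLU (the helper name here is hypothetical; nn.ReLU only reads its inplace attribute at forward time, so mutating the existing modules is enough):

import torch.nn as nn

def disable_inplace_relu(model: nn.Module):
    # nn.ReLU reads `self.inplace` on every forward call,
    # so mutating the attribute on the existing modules takes effect immediately
    for module in model.modules():
        if isinstance(module, nn.ReLU):
            module.inplace = False

disable_inplace_relu(model)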


Also, you don’t want to use register_backward_hook, as its behaviour is broken (see this issue here).
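For reference, a minimal sketch of the replacement on torch >= 1.8, using the densenet's features block from the snippets above (the hook signature is the same: hook(module, grad_input, grad_output)):

def grad_hook(module, grad_input, grad_output):
    # grad_output[0] is the gradient with respect to the module's output
    print(grad_output[0].shape)

handle = model.features.register_full_backward_hook(grad_hook)
# ... run forward + backward ...
handle.remove()  # detach the hook when done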

Thanks @AlphaBetaGamma96 @kevingoh. Resolved.


Hello Kevin,

I tried the following:

> for i, (name, layer) in enumerate(model.named_modules()):
>     if isinstance(layer, nn.ReLU):
>         layer = nn.ReLU(inplace=False)

but that didn’t disable those in-place ReLUs.

Would you please give some advice? Is it necessary to redefine the model and retrain?

Thanks
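As an aside, the loop above only rebinds the local name layer; it never modifies the module stored inside the model. Here is a sketch of a variant that does (the helper name is hypothetical; no retraining is needed, since the inplace flag carries no learnable parameters):

import torch.nn as nn

def replace_inplace_relu(model: nn.Module):
    # replace each ReLU on its parent module instead of rebinding a loop variable
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, nn.ReLU(inplace=False))
        else:
            replace_inplace_relu(child)  # recurse into nested containers

replace_inplace_relu(model)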

Can you share your model code too?

It’s the standard resnet34 with every ReLU in-place.

So, can you share a minimal reproducible example so I can debug your problem?