Hi, I want to set part of an intermediate layer's activation to zero during the forward pass.
Suppose my forward function looks like this:
```python
def forward(self, x):
    x = self.l1(x)
    x = F.relu(x)
    x[:10, :] = 0

    def x_hook(grad):
        grad_clone = grad.clone()
        grad_clone[:10, :] = 0
        return grad_clone

    x.register_hook(x_hook)
    output = F.log_softmax(x, dim=1)
    return output
```
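To clarify what I expect the hook to do during training, here is a toy example of the same idea on a plain tensor (not my actual model, just an illustration):

```python
import torch

x = torch.randn(4, 3, requires_grad=True)
y = x * 2

def mask_hook(grad):
    # zero out the gradient flowing into the first 2 rows of y
    grad_clone = grad.clone()
    grad_clone[:2, :] = 0
    return grad_clone

y.register_hook(mask_hook)
y.sum().backward()
print(x.grad)  # first two rows are all zeros, the remaining rows are all 2
```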
This runs well inside my train function. However, my test function looks like this:
```python
def test(model, device, test_loader):
    model.eval()
    ...
    with torch.no_grad():
        for data, target in test_loader:
            ...
            output = model(data)
            ...
```
and running the model inside this test function raises the following error:

```
cannot register a hook on a tensor that doesn't require gradient
```
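I can reproduce the same error with just a few lines outside of my model, so I suspect it is not specific to my forward function:

```python
import torch

x = torch.randn(5, 3, requires_grad=True)

with torch.no_grad():
    y = x * 2
    print(y.requires_grad)  # False: no_grad() disables tracking for y
    # the next line raises:
    # RuntimeError: cannot register a hook on a tensor that doesn't require gradient
    y.register_hook(lambda grad: grad)
```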
My questions are:
(1) Can we call register_hook on a torch.Tensor inside the forward function?
(2) Why does register_hook still try to deal with gradients here, even though it is called inside torch.no_grad()?
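Related to (1): would it be an acceptable pattern to only register the hook when the activation actually requires grad, e.g. something like the sketch below? (This is just my guess at a workaround, I have not confirmed it is the right way to do it.)

```python
def forward(self, x):
    x = self.l1(x)
    x = F.relu(x)
    x[:10, :] = 0

    # skip the hook when autograd is not tracking x,
    # e.g. under torch.no_grad() in the test function
    if x.requires_grad:
        def x_hook(grad):
            grad_clone = grad.clone()
            grad_clone[:10, :] = 0
            return grad_clone

        x.register_hook(x_hook)

    return F.log_softmax(x, dim=1)
```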
Any suggestion would be appreciated!