How can I compute the backward gradients for each tensor in the model parameters (p for p in model.parameters())? Note that I don't want to use module.register_backward_hook(); instead, I want to be able to put the hooks on the parameter tensors using tensor.register_hook().
Specifically, in the snippet below, I want to know what I should put in hook_fn to get the corresponding gradient for each parameter.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18().cuda()
criterion = nn.CrossEntropyLoss()  # assuming a classification loss

class HookTensor():
    def __init__(self, tensor):
        self.hook = tensor.register_hook(self.hook_fn)

    def hook_fn(self, grad):
        # register_hook calls this with the gradient of the loss w.r.t. the tensor
        self.grad = grad

    def close(self):
        self.hook.remove()

hookB = [HookTensor(p) for p in model.parameters() if p.requires_grad]

inputs, target = next(iter(train_loader))  # train_loader defined elsewhere
inputs = inputs.cuda()
target = target.cuda()
outputs = model(inputs)
loss = criterion(outputs, target)
loss.backward()

params = [p for p in model.parameters() if p.requires_grad]
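For reference, here is a self-contained CPU-only version of the same pattern, with a dummy linear model and random data standing in for resnet18, criterion, and train_loader (those stand-ins are assumptions for the example). It checks that the gradient caught by hook_fn matches p.grad after backward():

```python
import torch
import torch.nn as nn

class HookTensor:
    def __init__(self, tensor):
        self.hook = tensor.register_hook(self.hook_fn)

    def hook_fn(self, grad):
        # register_hook fires during backward with d(loss)/d(tensor)
        self.grad = grad

    def close(self):
        self.hook.remove()

model = nn.Linear(4, 2)  # dummy model in place of resnet18
hooks = [HookTensor(p) for p in model.parameters() if p.requires_grad]

inputs = torch.randn(3, 4)           # dummy batch in place of train_loader
target = torch.tensor([0, 1, 0])
loss = nn.CrossEntropyLoss()(model(inputs), target)
loss.backward()

# Each hook now holds the same gradient PyTorch accumulated into p.grad
for h, p in zip(hooks, model.parameters()):
    assert torch.equal(h.grad, p.grad)
```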
Thank you for your response.
This hook_fn is equivalent to the backward hook on modules that gives the gradients. If I want to get the forward values instead, what would be the correct way of defining hook_fn?
There is no forward hook for tensors: by the time you have the tensor to be able to hook it, you have just computed its value, so you have access to it directly; there is no need for a hook.
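A tiny illustration of this point, using a made-up intermediate tensor h: its forward value is in hand the moment it is computed, and only its gradient needs register_hook:

```python
import torch

x = torch.randn(3, requires_grad=True)
h = x * 2  # the forward value of h is available right here, no hook needed

grads = {}

def save_grad(g):
    # register_hook fires during backward with d(loss)/dh
    grads['h'] = g

h.register_hook(save_grad)
h.sum().backward()

# d(h.sum())/dh is a tensor of ones, so x.grad is 2 * ones
```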
I understand, but I want to avoid returning a lot of intermediate activations from my forward function. Just as we use register_forward_hook on Modules, I wonder if there is an equivalent way to define register_hook on the parameters of a model.
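For forward activations, the module-level register_forward_hook mentioned above can be sketched like this (the toy model and naming scheme are assumptions for the example), without returning anything extra from forward:

```python
import torch
import torch.nn as nn

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # register_forward_hook fires after the module's forward pass
        activations[name] = output.detach()
    return hook

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(save_activation(name))

out = model(torch.randn(3, 4))
# activations now maps '0' and '2' to tensors of shape (3, 8) and (3, 2)
```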