This code works perfectly and I’m able to retrieve the gradients saved in self.grads after calling .backward(), but it prints the following warning:
UserWarning:
Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions.
This hook will be missing some grad_input.
Please use register_full_backward_hook to get the documented behavior.
warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes "
However, when I replace register_backward_hook with register_full_backward_hook in __init__, the gradients are not saved, i.e., self.grads stays None.
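For reference, here is a minimal sketch of my setup (the wrapper class name, target layer index, and hook name are placeholders, not my exact code):

```python
import torch.nn as nn
from torchvision import models

class GradExtractor(nn.Module):
    # Hypothetical wrapper illustrating the hook setup described above.
    def __init__(self, target_layer=3):
        super().__init__()
        self.model = models.squeezenet1_1(pretrained=True)
        self.grads = None
        layer = self.model.features[target_layer]
        # Original call (works, but triggers the deprecation warning):
        # layer.register_backward_hook(self.save_grads)
        # Replacement call (self.grads stays None in my case):
        layer.register_full_backward_hook(self.save_grads)

    def save_grads(self, module, grad_input, grad_output):
        # grad_output is a tuple; keep the gradient w.r.t. the layer output
        self.grads = grad_output[0]

    def forward(self, x):
        return self.model(x)
```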
The following warning is printed when I use register_full_backward_hook, but not when I use register_backward_hook.
I’m using the pretrained SqueezeNet model from torchvision (torchvision.models.squeezenet1_1(pretrained=True)).
UserWarning: Output 0 of BackwardHookFunctionBackward is a view and is being modified inplace.
This view was created inside a custom Function (or because an input was returned as-is)
and the autograd logic to handle view+inplace would override the custom backward associated
with the custom Function, leading to incorrect gradients.
This behavior is deprecated and will be forbidden starting version 1.6.
You can remove this warning by cloning the output of the custom Function.
(Triggered internally at /pytorch/torch/csrc/autograd/variable.cpp:547.)
result = torch.relu_(input)
This Stack Overflow question seems to describe the same problem as well.
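One workaround I’m considering, assuming the in-place torch.relu_ call flagged in the warning is what breaks the full hook, is to switch the model’s ReLUs to out-of-place mode before registering the hook (not verified on my side):

```python
import torch.nn as nn
from torchvision import models

model = models.squeezenet1_1(pretrained=True)

# Turn off in-place ReLU so the full backward hook does not hit the
# view + in-place path mentioned in the warning (assumption, not verified).
for module in model.modules():
    if isinstance(module, nn.ReLU):
        module.inplace = False
```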
I still cannot reproduce this issue with my code snippet after replacing models.alexnet() with models.SqueezeNet(). Both hooks return a valid grad value, and the first approach raises the expected deprecation warning.
Were you able to reproduce the error with my code snippet?
If so, you might need to update to the latest nightly release or build PyTorch from source, as it seems a newer version might have fixed this issue.
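Roughly, the check I ran looks like this (paraphrased; the layer index and input shape are placeholders):

```python
import torch
from torchvision import models

model = models.SqueezeNet()
grads = {}

def hook(module, grad_input, grad_output):
    # Store the gradient w.r.t. the block's output.
    grads['out'] = grad_output[0]

# Register either hook variant on an intermediate block.
model.features[3].register_full_backward_hook(hook)
# model.features[3].register_backward_hook(hook)  # deprecated variant

x = torch.randn(1, 3, 224, 224)
out = model(x)
out.sum().backward()
print(grads['out'].shape)  # populated for me with both hook variants
```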