While implementing backward-of-backward (double backward), I find that the following code doesn't work.
import torch
from torch import nn

x = torch.ones(1024, requires_grad=True)
y = x + 1
print(y.grad_fn)      # y has backward information (grad_fn is set)

param = nn.Parameter(y)
print(param.grad_fn)  # param does not have backward information now!

# net, image, label, criterion are defined elsewhere in my code;
# net.set_param is my own helper that installs the parameter into the module.
net.set_param(param)
output = net(image)
loss = criterion(output, label)

d_loss_dx = torch.autograd.grad(loss, x, only_inputs=True)[0]
When execution reaches this line:
d_loss_dx = torch.autograd.grad(loss, x, only_inputs=True)[0]
It will cause an error:
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
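I think this happens because nn.Parameter always constructs a new leaf tensor, so the history from x to y is not carried over into param. A minimal standalone check (the tensor size here is arbitrary):

import torch
from torch import nn

x = torch.ones(4, requires_grad=True)
y = x + 1
print(y.grad_fn)      # <AddBackward0 ...>: y is connected to x
param = nn.Parameter(y)
print(param.grad_fn)  # None: nn.Parameter produced a fresh leaf tensor
print(param.is_leaf)  # True
# Anything computed from param has no path back to x, so
# torch.autograd.grad(loss, x) reports x as unused in the graph.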
How can I retain the backward information in y?
I tried:
param._grad_fn = y._grad_fn
but that raises another error:
RuntimeError: _grad_fn can be only set to None
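One direction I'm considering, though I'm not sure it's the intended way, is to skip nn.Parameter entirely and substitute y into the forward pass functionally with torch.func.functional_call (available in recent PyTorch), so the graph from x stays connected. A rough sketch with a toy linear module standing in for my real net (the module, the data, and the parameter name "weight" are placeholders):

import torch
from torch import nn
from torch.func import functional_call

# Toy stand-ins for the real net / data, for illustration only
net = nn.Linear(4, 1, bias=False)
image = torch.randn(2, 4)
label = torch.randn(2, 1)
criterion = nn.MSELoss()

x = torch.ones(1, 4, requires_grad=True)
y = x + 1                      # y keeps its grad_fn, same shape as net.weight

# Run the forward pass with y substituted for net.weight, without converting
# it to a Parameter, so the graph x -> y -> loss is preserved.
output = functional_call(net, {"weight": y}, (image,))
loss = criterion(output, label)

d_loss_dx = torch.autograd.grad(loss, x)[0]
print(d_loss_dx.shape)         # torch.Size([1, 4])

Is something like this the recommended approach, or is there a way to keep the backward information when the tensor is wrapped as a Parameter?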