Community, I've got a problem.
Suppose my custom loss function has an extra learnable parameter x:
Loss = f(outputs, targets, x)
Now I use torch.autograd.grad to compute the gradients w.r.t. the model parameters (not w.r.t. x).
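A minimal sketch of what I mean (the model, data, and f below are just placeholders):

```python
import torch
import torch.nn as nn

# Placeholder setup: a tiny model, random data, and a loss that
# multiplies the MSE by the extra parameter x.
model = nn.Linear(4, 1)
x = torch.tensor(0.5, requires_grad=True)  # the extra loss parameter

inputs = torch.randn(8, 4)
targets = torch.randn(8, 1)

outputs = model(inputs)
loss = x * ((outputs - targets) ** 2).mean()  # Loss = f(outputs, targets, x)

# create_graph=True keeps the gradient computation itself in the graph,
# so an update like param - lr * grad still depends on x.
grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
```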
The problem is that I cannot manually update the model parameters in a way that keeps x in the graph, so that the update remains backpropagable to x.
If I do
param = param - lr * grad
or
tmp = param - lr * grad; setattr(model, name, tmp)
the error is
cannot assign 'torch.cuda.FloatTensor' as parameter
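Concretely, continuing the sketch above:

```python
lr = 0.01

# The differentiable update: tmp is a plain Tensor whose autograd
# history still reaches x.
name, param = next(iter(model.named_parameters()))  # e.g. 'weight'
grad = grads[0]
tmp = param - lr * grad

# nn.Module only accepts nn.Parameter (or None) for a registered
# parameter attribute, so the assignment raises the error quoted above.
try:
    setattr(model, name, tmp)
except TypeError as e:
    print(e)  # cannot assign 'torch.FloatTensor' as parameter 'weight' ...
```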
If I convert the Tensor to nn.Parameter, or use
param.data = ...,
then x is no longer in the graph.
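The two workarounds I tried, continuing the sketch:

```python
# Workaround 1: wrapping in nn.Parameter. The module accepts it, but
# (as far as I can tell) nn.Parameter detaches its input and creates
# a new leaf tensor, so the history back to x is gone.
setattr(model, name, nn.Parameter(param - lr * grad))

# Workaround 2: assigning through .data bypasses autograd entirely,
# with the same result:
#   param.data = param.data - lr * grad

# Either way, backpropagating a later loss through the updated
# parameters produces no gradient for x.
```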
optimizer.step() is the same as updating param.data, so it breaks the graph in the same way.