Using Variable to update nn.Parameter and keep gradient

Hi community,

I've run into a problem.

Suppose the custom loss function has an extra parameter x, e.g.,

Loss = f(outputs, targets, x)

Now I use torch.autograd.grad to compute the gradients w.r.t. the model parameters (not x).
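
Roughly, my setup looks like the sketch below (model, f, x, inputs, targets are just placeholder names for illustration, not my real code); I pass create_graph=True so that the returned grads themselves stay connected to x:

import torch
import torch.nn as nn

model = nn.Linear(10, 1)                       # placeholder model
x = torch.tensor(0.5, requires_grad=True)      # the extra loss parameter

inputs, targets = torch.randn(4, 10), torch.randn(4, 1)

def f(outputs, targets, x):
    # toy loss that depends on x
    return x * ((outputs - targets) ** 2).mean()

loss = f(model(inputs), targets, x)

# gradients w.r.t. the model parameters only (not x);
# create_graph=True keeps these grad tensors in the graph
grads = torch.autograd.grad(loss, tuple(model.parameters()), create_graph=True)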

The problem is that I cannot manually update the model parameters in a way that keeps x in the graph, so that gradients can still flow back to x.

For example (a minimal sketch of both cases follows below):

If I do param = param - lr * grad, or tmp = param - lr * grad; setattr(model, name, tmp), I get the error cannot assign 'torch.cuda.FloatTensor' as parameter.

If I convert the Tensor to an nn.Parameter, or use param.data = ..., then x is no longer in the graph
(and backward() + step() amounts to the same thing as updating param.data).
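
A minimal sketch of both cases, continuing the placeholder names from above (the exact error text says torch.FloatTensor or torch.cuda.FloatTensor depending on the device):

lr = 0.1
new_w = model.weight - lr * grads[0]   # plain Tensor, still connected to x

# case 1: direct assignment / setattr raises the error
try:
    setattr(model, 'weight', new_w)    # same as model.weight = new_w
except TypeError as e:
    print(e)                           # cannot assign '...FloatTensor' as parameter 'weight' ...

# case 2: re-wrapping as nn.Parameter is accepted, but it creates a new leaf
model.weight = nn.Parameter(new_w)
print(model.weight.grad_fn)            # None -> the connection to x is gone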

Hi Zeeky,
Model parameters in PyTorch are all leaf variables, which means you cannot simply assign new values to them while autograd is recording. To perform the update manually, wrap it in a torch.no_grad() block:

with torch.no_grad():
    param -= lr * grad  # in-place update, so the parameter itself changes
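
For example, assuming grads is the tuple returned by torch.autograd.grad over model.parameters() and lr is your learning rate, the full manual step would be a sketch like:

with torch.no_grad():
    for param, grad in zip(model.parameters(), grads):
        param -= lr * grad    # in-place, so the module's own parameters are updated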