Trainable parameter as input

I can’t understand the difference:

prior = Variable(torch.zeros(24), requires_grad=True)
prior_param = [nn.Parameter(prior.data)]
h_test = torch.autograd.grad(outputs=prior.mean(), inputs=prior_param)  # does not work
h_test = torch.autograd.grad(outputs=prior.mean(), inputs=prior)  # works

I want to use a trainable parameter as an input to the computation graph.
The error is
RuntimeError: One of the differentiated Variables appears to not have been used in the graph
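For context, this error means `autograd.grad` was asked to differentiate with respect to a leaf tensor that was never actually used to compute the output. A minimal sketch of the failing vs. working pattern (assuming PyTorch >= 0.4, where `Variable` is no longer needed):

```python
import torch

prior = torch.zeros(24, requires_grad=True)

# Wrapping a copy of prior's data in a new leaf creates a tensor that is
# disconnected from the graph that computes prior.mean(), so asking for
# gradients w.r.t. it raises:
#   RuntimeError: One of the differentiated Tensors appears to not have
#   been used in the graph
disconnected = prior.detach().clone().requires_grad_(True)

# Differentiating w.r.t. prior itself works, because prior was used:
grad, = torch.autograd.grad(outputs=prior.mean(), inputs=prior)
# grad has the same shape as prior; each entry is 1/24.
```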

What version of pytorch are you using? I couldn’t repro this.

I am using 0.3.1.post2

I think it’s because you’re using prior as the output while differentiating with respect to prior_param, which was never used to compute it. This works on 0.3.1:

prior = Variable(torch.zeros(24), requires_grad=True)
prior_param = nn.Parameter(torch.zeros(24))
h_test = torch.autograd.grad(outputs=prior_param.mean(), inputs=[prior_param])
h_test = torch.autograd.grad(outputs=prior.mean(), inputs=prior)

I’m not sure why the code you posted works on 0.4 and not on 0.3.1, but it’s probably related to the tensor/Variable merge.
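For what it’s worth, since the 0.4 merge a tensor carries its autograd state directly, so `Variable` is a no-op wrapper and the trainable parameter can simply be the graph input. A minimal sketch (assuming PyTorch >= 0.4):

```python
import torch
import torch.nn as nn

# The trainable parameter is itself the input to the graph:
prior_param = nn.Parameter(torch.zeros(24))
out = (prior_param * 2).mean()  # any computation built from the parameter

# Differentiating w.r.t. the parameter works, because it was used:
h, = torch.autograd.grad(outputs=out, inputs=prior_param)
# h has the same shape as prior_param; each entry is 2/24.
```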