Confusing behavior of autograd

I am trying to use higher-order gradients to optimize the parameters of a network, i.e.,

from torch import autograd

var_input = autograd.Variable(input_data, requires_grad=True)
loss = model(var_input)
# first-order gradient of the loss w.r.t. the input, kept differentiable
gradient = autograd.grad(loss, inputs=var_input, create_graph=True, retain_graph=True, only_inputs=True)[0]
train_loss = loss_fn(gradient)
# second grad call: differentiate the gradient-based loss w.r.t. the model parameters
gradients = autograd.grad(train_loss, inputs=model.parameters(), create_graph=False, retain_graph=False, only_inputs=True)

However, I got an error: "One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior."

Could you tell me how to check which tensor is not used in the graph?

This will happen if the loss is linear in a parameter and thus the first gradient does not depend on it.
The quickest way to find which one is to pass allow_unused=True in the grad call and then check which of the returned gradients are None.
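A minimal sketch of that check, reusing the model and train_loss from your snippet (so the names are just placeholders for your setup):

from torch import autograd

# Ask for the gradients without raising on unused tensors,
# then see which entries come back as None.
params = list(model.parameters())
grads = autograd.grad(train_loss, params, allow_unused=True)
for (name, _), g in zip(model.named_parameters(), grads):
    if g is None:
        print(name, "is not used in the graph of train_loss")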

Best regards

Thomas

Thanks for your timely response.
What is the meaning of " the loss is linear in a parameter and thus the first gradient does not depend on it"?
Do you mean that, assuming y = 2x, d(dy/dx)/dx = 0? But why doesn't PyTorch just set the gradient to 0?

If you got this for a first derivative, it would likely be a bug (a parameter that never contributes to the loss); only with higher-order derivatives is it benign, which is why PyTorch asks you to opt in with allow_unused=True instead of silently returning 0.
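To make the linear case concrete, here is a small toy sketch (not your model, just an illustration): for a single nn.Linear layer, the gradient of the output w.r.t. the input is just the weight, so the bias never appears in that gradient, and the second grad call reports the bias as unused.

import torch
from torch import autograd

lin = torch.nn.Linear(3, 1)
x = torch.randn(3, requires_grad=True)
out = lin(x).sum()

# First derivative w.r.t. the input; keep the graph so we can differentiate again.
(dx,) = autograd.grad(out, x, create_graph=True)  # dx is the weight row, the bias is gone

# Differentiate a function of that gradient w.r.t. the parameters:
# the weight gets a gradient, the bias comes back as None.
penalty = (dx ** 2).sum()
grads = autograd.grad(penalty, lin.parameters(), allow_unused=True)
print([g is None for g in grads])  # [False, True] -> the bias is the unused tensor

Without allow_unused=True, this second call raises exactly the error you are seeing.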