How to compute the grad of the input with respect to the loss

I have a question about torch autograd. When I build a model and write the following code:

x.requires_grad = True
pred = model(x)
pred.sum().backward()
print(x.grad)

I can get the grad of x, and I want to go one step further. Is there a way to get d(x.sum())/d(pred) using the following code?

x.requires_grad = True
pred = model(x)
pred.requires_grad = True
pred.sum().backward(retain_graph=True)
x.sum().backward()
print(pred.grad)

I have tried this, but pred.grad is None and x.grad.grad_fn is None. Thanks for your help :smile:

If you replace pred.requires_grad = True with pred.retain_grad(), it should work. pred is a non-leaf (intermediate) tensor, so autograd does not populate its .grad by default; calling retain_grad() before backward tells autograd to keep the gradient for it.
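
For example, here is a minimal sketch of the working version (the nn.Linear model and the tensor shapes are just placeholders for illustration; the key change is calling pred.retain_grad() before backward):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)              # placeholder model for illustration
x = torch.randn(3, 4, requires_grad=True)

pred = model(x)                      # pred is a non-leaf (intermediate) tensor
pred.retain_grad()                   # ask autograd to keep pred.grad after backward

pred.sum().backward()
print(x.grad)                        # d(pred.sum())/dx
print(pred.grad)                     # d(pred.sum())/d(pred): a tensor of ones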

This might be helpful: pytorch - Why does autograd not produce gradient for intermediate variables? - Stack Overflow
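
As a side note, if you just need the gradient with respect to an intermediate tensor and don't want to store it in .grad, torch.autograd.grad also accepts non-leaf tensors as inputs (same placeholder model as above, so treat this as a sketch):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)              # placeholder model for illustration
x = torch.randn(3, 4, requires_grad=True)
pred = model(x)

# Gradient of pred.sum() with respect to the intermediate tensor pred,
# returned directly instead of being accumulated into pred.grad.
(grad_pred,) = torch.autograd.grad(pred.sum(), pred)
print(grad_pred)                     # tensor of ones with pred's shape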