How do I compute model gradients correctly?

I want to compute the gradients of a model with respect to certain inputs. I think the code below should work, but PyTorch throws an error unless I pass retain_graph=True, which I feel shouldn't be necessary. This is how I do it right now:

import torch
from torch import autograd

for x, y in loader:
  logits = model(x)
  loss = ce_loss(logits, y)

  # Throws an error unless I include retain_graph=True:
  grads = autograd.grad(loss, model.parameters())
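
If it helps, I can reproduce what I believe is the same error with a standalone snippet (a minimal sketch using a toy tensor, not my actual model):

import torch
from torch import autograd

w = torch.randn(3, requires_grad=True)
loss = (w * w).sum()

autograd.grad(loss, w, retain_graph=True)  # works: the graph is kept alive
autograd.grad(loss, w)                     # works: the graph is freed after this call
autograd.grad(loss, w)                     # RuntimeError: trying to backward through the graph a second time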

What's the right way to do this without using loss.backward()? I will be performing similar operations for input optimization as well, so I really need to know how to do it with autograd.grad().
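
For context, this is roughly the input-optimization I have in mind (a sketch only; model and ce_loss are as above, and x, y stand in for a single batch from the loader):

import torch
from torch import autograd

# Make the input a leaf tensor that tracks gradients.
x = x.detach().clone().requires_grad_(True)
logits = model(x)
loss = ce_loss(logits, y)

# Gradient of the loss w.r.t. the input instead of the parameters.
(input_grads,) = autograd.grad(loss, x)
x = x - 0.1 * input_grads  # one illustrative gradient step on the input; the step size is arbitrary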