Is the gradient calculated for targets/labels in the loss?

I wonder whether loss.backward() will (unnecessarily) compute gradients with respect to the constant target classification labels.
Should I wrap the target labels in a non-grad Variable? i.e.:

from torch.autograd import Variable

def loss(pred, target):
    # detach the target from the autograd graph so no grad is tracked for it
    target = Variable(target.clone(), requires_grad=False)
    return (target - pred).abs().sum()  # e.g. L1 loss

Or do I actually want/need those gradients?
What is the best way to test this? (code please)
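For context, here is the kind of check I have in mind (a minimal sketch, assuming a recent PyTorch where plain tensors default to requires_grad=False and Variable is merged into Tensor):

```python
import torch

pred = torch.randn(3, requires_grad=True)
target = torch.randn(3)  # plain tensor: requires_grad defaults to False

loss = (target - pred).abs().sum()
loss.backward()

print(pred.grad)    # populated: gradients flow to the prediction
print(target.grad)  # None: no gradient is computed for the target
```

If target.grad stays None after backward(), I would conclude the extra wrapping is unneeded, but I am not sure this check is conclusive.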