I'm very new to PyTorch, so this might be a stupid question. While going through the autograd tutorial I came to this section, where it's written:
> Now in this case `y` is no longer a scalar. `torch.autograd` could not compute the full Jacobian directly, but if we just want the vector-Jacobian product, simply pass the vector to `backward` as an argument:
```python
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
```
Now, I don't understand why we are passing a different tensor `v` to `backward` in order to compute the gradients for `x`.
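In case it helps, here is a complete, runnable version of what I'm trying. I've filled in the `x` and `y` setup from my memory of the earlier part of the tutorial, so the exact expressions there are my assumption:

```python
import torch

# My reconstruction of the setup from earlier in the tutorial
# (the exact operations may differ, but the shapes are the same):
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
    y = y * 2

# y is a 3-element vector here, so calling y.backward() with no argument
# raises "grad can be implicitly created only for scalar outputs".
# Passing v makes autograd compute the vector-Jacobian product J^T @ v,
# which has the same shape as x and is stored in x.grad.
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
```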