In the official tutorial, the `gradients` tensor passed to `backward()` has the same shape as the input, like below:
```python
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)
```
But on another page:

```python
net.zero_grad()
out.backward(torch.randn(1, 10))
```
Here the tensor passed to `backward()` has the shape of the output instead. Which shape should it be? Can anyone help me understand?
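For reference, here is a minimal sketch of the two cases as I understand them (the shapes are my own illustrative choices, not from either tutorial):

```python
import torch

# Case 1: the output has the same shape as the input, so the
# `gradient` tensor passed to backward() matches both at once.
x = torch.randn(3, requires_grad=True)
y = x * 2                                    # y: shape (3,), same as x
y.backward(torch.tensor([0.1, 1.0, 0.0001]))
print(x.grad.shape)                          # torch.Size([3])

# Case 2: the output shape differs from the input shape; the
# `gradient` tensor must match the OUTPUT's shape.
w = torch.randn(3, 4, requires_grad=True)    # input: shape (3, 4)
out = w.sum(dim=0)                           # output: shape (4,)
out.backward(torch.ones(4))                  # gradient shaped like out
print(w.grad.shape)                          # torch.Size([3, 4])
```

So in the first tutorial the argument only *looks* like it matches the input, because `y` happens to share `x`'s shape there.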