I modified the basic autograd example from the PyTorch tutorial (found here: https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py), and now I get an unexpected result where the computed gradient of x is None. Code is as follows:
    import torch

    if torch.cuda.is_available():
        device = torch.device("cuda:1")
    x = torch.arange(4, device=device, dtype=torch.float32, requires_grad=True).view(2, 2)
    print(x)
    y = x + 2
    z = y * y * 3
    out = z.mean()
    out.backward()
    print(x.grad)
    print(x.requires_grad)
    tensor([[0., 1.],
            [2., 3.]], device='cuda:1', grad_fn=<ViewBackward>)
    None
    True
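One thing I noticed while poking at this (not sure if it's related): if I re-run the same snippet on CPU and check is_leaf, the x produced by .view() is not a leaf tensor. The extra prints below are just my own diagnostics, not part of the tutorial:

```python
import torch

# Same construction as above, on CPU for simplicity
x = torch.arange(4, dtype=torch.float32, requires_grad=True).view(2, 2)
y = x + 2
z = y * y * 3
out = z.mean()
out.backward()

# x here is the *output of .view()*, not the tensor created by arange,
# so autograd seems to treat it as an intermediate tensor
print(x.is_leaf)  # prints False
print(x.grad)     # prints None
```

So x behaves differently from the tutorial's x even though it still has requires_grad=True.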
All I changed from the tutorial is x. What happened here? Also, slightly OT, but what is the tensor argument I can pass to the backward function? It's really not well explained in the tutorial.