Why is there no grad after the tensor is reshaped?

import torch

x = torch.tensor([1., 2., 3., 5., 6., 7.], requires_grad=True)
xx = x.reshape((2, -1))
y = xx * xx
yy = torch.mean(y)
yy.backward()
print(x.grad, xx.grad, y.grad, yy.grad)
# x.grad: tensor([0.3333, 0.6667, 1.0000, 1.6667, 2.0000, 2.3333])
# xx.grad, y.grad, yy.grad: None (recent PyTorch also emits a UserWarning here)

x: a leaf of the graph, so backward() populates x.grad.
xx: an intermediate (non-leaf) result; autograd does not keep gradients for non-leaf tensors in .grad by default, to save memory. If you need the gradient of an intermediate tensor, call .retain_grad() on it before backward(), as sketched below.
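For reference, a minimal sketch of the same computation using retain_grad(), so the intermediate gradients survive backward():

import torch

x = torch.tensor([1., 2., 3., 5., 6., 7.], requires_grad=True)
xx = x.reshape((2, -1))
xx.retain_grad()   # ask autograd to keep the gradient of this non-leaf tensor
y = xx * xx
y.retain_grad()
yy = torch.mean(y)
yy.backward()

print(x.grad)   # tensor([0.3333, 0.6667, 1.0000, 1.6667, 2.0000, 2.3333])
print(xx.grad)  # same values reshaped to (2, 3): d(yy)/d(xx) = 2*xx / 6
print(y.grad)   # 1/6 everywhere: d(yy)/d(y) for a mean over 6 elements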