Hi,
I would like to check with you whether what I'm facing is a bug or a feature.
The following code returns an error:
import torch

# Create a leaf tensor on the CPU, then move it to the GPU with .to()
x = torch.tensor([0.], requires_grad=True).to('cuda')
optimizer = torch.optim.Adam([x])
> ValueError: can't optimize a non-leaf Tensor
while the following works fine:
# Create the tensor directly on the GPU instead
x = torch.tensor([0.], requires_grad=True, device='cuda')
optimizer = torch.optim.Adam([x])
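For reference, checking the leaf status of each tensor seems to show the difference. My assumption is that .to() returns a new tensor that is no longer a leaf, but please correct me if I'm misreading this:

import torch

# Created directly on the GPU: the tensor remains a leaf
a = torch.tensor([0.], requires_grad=True, device='cuda')
print(a.is_leaf)  # True

# Created on the CPU and then moved with .to(): the result is not a leaf
b = torch.tensor([0.], requires_grad=True).to('cuda')
print(b.is_leaf)  # False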
Is this by design or is it a bug?