A bug or a feature?

Hi,
I would like to check with you whether what I'm facing is a bug or a feature 🙂

The following code raises an error

import torch

x = torch.tensor([0.], requires_grad=True).to('cuda')
optimizer = torch.optim.Adam([x])
> ValueError: can't optimize a non-leaf Tensor

while the following works fine

x = torch.tensor([0.], requires_grad=True, device='cuda')
optimizer = torch.optim.Adam([x])

Is this by design or is it a bug?


This is expected. In your first snippet, the leaf tensor is torch.tensor([0.], requires_grad=True); calling .to('cuda') on it is an autograd operation, so the result is a new tensor computed from the original, i.e., an intermediate (non-leaf) tensor. Optimizers can only update leaf tensors, because gradients only accumulate in .grad on leaves. In the second snippet, the tensor is created directly on the GPU, so it is itself the leaf.
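
A minimal sketch illustrating the difference with .is_leaf, plus one common workaround when you have to move an existing tensor (assuming a CUDA device is available):

import torch

# Leaf: created directly by the user on the target device.
a = torch.tensor([0.], requires_grad=True, device='cuda')
print(a.is_leaf)  # True

# Non-leaf: .to('cuda') is a tracked operation, so its result is intermediate.
b = torch.tensor([0.], requires_grad=True).to('cuda')
print(b.is_leaf)  # False

# Workaround: move first, then mark the moved copy as a leaf requiring grad.
c = torch.tensor([0.]).to('cuda').detach().requires_grad_()
print(c.is_leaf)  # True
optimizer = torch.optim.Adam([c])  # works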
