I get this error: `ValueError: can't optimize a non-leaf Tensor`. My variable is a leaf (`torch.Tensor([1.]).requires_grad_().to(device)`), yet when I pass it to the optimizer I get this error.

Hi,

`.to` is a differentiable operation and is therefore recorded by autograd, which makes your tensor a non-leaf.

Please see if this helps:

```
import torch
a = torch.tensor([1.0], device='cuda', requires_grad=True)
print(a.is_leaf) # True - you might want to use this approach
b = torch.Tensor([1.]).requires_grad_().to('cuda')
print(b.is_leaf) # False
```
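If you already have a tensor that became non-leaf through a recorded `.to()` call, one common pattern (shown here as a sketch; a dtype conversion stands in for a device move so it runs without a GPU) is to detach it and re-enable gradients to obtain a fresh leaf:

```
import torch

# A .to() that actually changes something (dtype here, device in your case)
# is recorded by autograd, so the result is non-leaf.
b = torch.tensor([1.0]).requires_grad_().to(torch.float64)
print(b.is_leaf)  # False

# detach() cuts the tensor out of the graph; requires_grad_() makes it a
# fresh leaf that an optimizer will accept.
c = b.detach().requires_grad_()
print(c.is_leaf)  # True
```

Note that gradients then flow to `c`, not to the original `b`, so creating the tensor directly on the target device (as above) is usually the cleaner approach.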

Thanks for the reply, but again I get the same error.

The problem is with

`torch.optim.SGD(list(a), lr=0.1)`. Here I get the error:

`ValueError: can't optimize a non-leaf Tensor`

If I set

`a = torch.Tensor([1.]).requires_grad_()`

then I can pass it through the optimizer, but I cannot optimize the `a` variable.

Use this instead:

```
opt = torch.optim.SGD([a], lr=0.1)
```
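Putting it together, here is a minimal end-to-end sketch (on CPU so it runs anywhere; pass `device='cuda'` as in the earlier snippet if you have a GPU) with a toy objective to confirm that `a` actually gets optimized:

```
import torch

a = torch.tensor([1.0], requires_grad=True)  # leaf tensor
opt = torch.optim.SGD([a], lr=0.1)           # note: [a], not list(a)

for _ in range(5):
    opt.zero_grad()
    loss = (a - 3.0) ** 2  # toy objective with minimum at a = 3
    loss.backward()
    opt.step()

print(a.is_leaf)  # True
print(a)          # value has moved from 1.0 toward 3.0
```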

The reason why `list(a)` doesn't work but `[a]` does:

```
print(list(a)) # [tensor(1., device='cuda:0', grad_fn=<UnbindBackward0>)]
print([a]) # [tensor([1.], device='cuda:0', requires_grad=True)]
```

In the first case, iterating over the tensor applies an unbind operation, which autograd tracks — as is evident from the `grad_fn` attribute on the printed element — so the tensors inside `list(a)` are not leaves (while `a` itself remains one), and the optimizer rejects them.
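The unbind behavior can be seen directly (CPU sketch, so it runs without a GPU):

```
import torch

a = torch.tensor([1.0], requires_grad=True)

# list(a) iterates over a, which unbinds it; each element is the
# output of that recorded operation and hence non-leaf.
elem = list(a)[0]
print(elem.is_leaf)  # False
print(elem.grad_fn)  # an UnbindBackward node

# The original tensor itself is unaffected and remains a leaf.
print(a.is_leaf)     # True
```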


Great it is working now!

Thank you so much!