Please see the comment here: "ValueError: can't optimize a non-leaf Tensor?". The main problem you are running into is that
auto a = torch::ones({2, 2}, torch::TensorOptions().dtype(torch::kFloat).requires_grad(true)).cuda();
is NOT a leaf node, so autograd does not accumulate gradients into a.grad(). This is because you're calling .cuda() on the actual leaf node, which returns a new (non-leaf) tensor. If you construct the tensor directly on the CUDA device instead, then you can get gradients on it:
auto a = torch::ones({2, 2}, torch::TensorOptions().dtype(torch::kFloat).requires_grad(true).device(torch::kCUDA));
auto c = (a + a).sum();
c.backward();
a.grad(); // now has something
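For completeness, here is a minimal self-contained sketch (not from the original post; the comments show the expected output) that contrasts the two constructions by checking is_leaf():

#include <torch/torch.h>
#include <iostream>

int main() {
  // Calling .cuda() on the leaf returns a new, non-leaf tensor.
  auto b = torch::ones({2, 2}, torch::TensorOptions().dtype(torch::kFloat).requires_grad(true)).cuda();
  std::cout << b.is_leaf() << std::endl; // 0 (false)

  // Constructed directly on CUDA, so it is a leaf.
  auto a = torch::ones({2, 2}, torch::TensorOptions().dtype(torch::kFloat).requires_grad(true).device(torch::kCUDA));
  std::cout << a.is_leaf() << std::endl; // 1 (true)

  auto c = (a + a).sum();
  c.backward();
  std::cout << a.grad() << std::endl; // a 2x2 tensor of 2s, since d(a+a).sum()/da = 2
}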
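If you really do need to start from a CPU leaf and move it, one alternative (assuming a reasonably recent LibTorch where Tensor::retain_grad() is exposed in the C++ API) is to ask autograd to also populate the gradient of the non-leaf copy:

auto leaf = torch::ones({2, 2}, torch::TensorOptions().dtype(torch::kFloat).requires_grad(true));
auto a = leaf.cuda(); // non-leaf copy on the GPU
a.retain_grad();      // keep gradients on this non-leaf tensor too
auto c = (a + a).sum();
c.backward();
a.grad();    // populated because of retain_grad()
leaf.grad(); // the original CPU leaf also accumulates gradients

That said, constructing the tensor directly on CUDA as above is the simpler option.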