Dear all,
When using the torch::autograd::grad function with create_graph = true and retain_graph = true, the resulting gradient has requires_grad() == false, which should not be the case. The excerpt below is part of a larger function whose argument is a const torch::Tensor& input, a batched tensor of inputs whose elements I loop over with the index i; m_function is simply a function mapping a torch::Tensor to a torch::Tensor:
torch::Tensor tmp_in = input.index({ i });
torch::Tensor tmp_out = m_function(tmp_in).reshape({ -1 });
torch::Tensor tmp_grad_outputs = torch::ones_like(tmp_out);

std::cout << "tmp_in.requires_grad() is: " << tmp_in.requires_grad() << std::endl;
std::cout << "tmp_out.requires_grad() is: " << tmp_out.requires_grad() << std::endl;

torch::Tensor tmp_grad = torch::autograd::grad(
    /*outputs=*/{ tmp_out },
    /*inputs=*/{ tmp_in },
    /*grad_outputs=*/{ tmp_grad_outputs },
    /*retain_graph=*/true,
    /*create_graph=*/true)[0];

std::cout << "tmp_grad.requires_grad() is: " << tmp_grad.requires_grad() << std::endl;
The final line prints false, whereas the equivalent PyTorch code in Python prints True. Because tmp_grad does not require grad, the subsequent attempt to compute a higher-order derivative then (understandably) crashes. Has anybody encountered this issue before?
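For comparison, here is a minimal, self-contained sketch of the pattern I expect to work; the quadratic here is only a stand-in for my actual m_function, and the input is created directly with requires_grad set, unlike the sliced tmp_in above:

#include <torch/torch.h>
#include <iostream>

int main() {
    // Stand-in input that explicitly requires grad (assumption: my real input does too).
    torch::Tensor x = torch::rand({3}, torch::requires_grad());

    // Quadratic stand-in for m_function, so the first derivative is 2*x.
    torch::Tensor y = (x * x).reshape({-1});
    torch::Tensor go = torch::ones_like(y);

    // First-order gradient with create_graph = true, so it should itself require grad.
    torch::Tensor g = torch::autograd::grad(
        /*outputs=*/{ y },
        /*inputs=*/{ x },
        /*grad_outputs=*/{ go },
        /*retain_graph=*/true,
        /*create_graph=*/true)[0];

    std::cout << "g.requires_grad() is: " << g.requires_grad() << std::endl;

    // Second-order derivative, which is what ultimately fails in my code.
    torch::Tensor g2 = torch::autograd::grad(
        /*outputs=*/{ g },
        /*inputs=*/{ x },
        /*grad_outputs=*/{ torch::ones_like(g) },
        /*retain_graph=*/false,
        /*create_graph=*/false)[0];

    std::cout << "g2: " << g2 << std::endl;
    return 0;
}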
Kind regards,
Robin