LibTorch/C++ General grad function

In PyTorch, there exists a general autograd function:
torch.autograd.grad(outputs, inputs, ...)
that returns the gradients directly rather than accumulating them in place into each tensor's .grad field. Does a similar function or interface exist in LibTorch? Currently, I can find most of the in-place operations but not this general one.
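For what it's worth, recent LibTorch releases do appear to ship a C++ counterpart, torch::autograd::grad (declared in torch/csrc/autograd/autograd.h and reachable through torch/torch.h). A minimal sketch of how it seems to be used; defaults may vary between versions, so check the header of your build:

    #include <torch/torch.h>
    #include <iostream>

    int main() {
      // Leaf tensor that records gradients.
      auto x = torch::randn({3}, torch::requires_grad());
      auto y = (x * x).sum();

      // Like torch.autograd.grad in Python: returns the gradients of
      // `outputs` w.r.t. `inputs` without touching any .grad field.
      auto grads = torch::autograd::grad(/*outputs=*/{y}, /*inputs=*/{x});

      std::cout << grads[0] << "\n";  // expected: 2 * x
    }

As in Python, a scalar output needs no explicit grad_outputs; for non-scalar outputs you would pass a tensor of the same shape as the third argument.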

I have the same problem as you. I wanted to use

torch::autograd::backward()

instead, but I got the following error:

one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 64, 4, 4]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

I don't know how to solve this problem. Did you solve it? Thanks!
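As an aside, the anomaly-detection hint in that message also seems reachable from C++. A sketch, assuming torch::autograd::AnomalyMode from torch/csrc/autograd/anomaly_mode.h is available in your LibTorch version (the amount of forward-trace detail reported in a pure C++ program may be more limited than in Python):

    #include <torch/torch.h>
    #include <torch/csrc/autograd/anomaly_mode.h>

    int main() {
      // Roughly the C++ analogue of torch.autograd.set_detect_anomaly(True):
      // intended to make backward errors point at the offending forward op.
      torch::autograd::AnomalyMode::set_enabled(true);

      auto x = torch::randn({2, 2}, torch::requires_grad());
      auto loss = (x * x).sum();
      loss.backward();  // autograd errors should now carry extra context
    }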


I have encountered the same error. I have narrowed the "inplace operation" down to the point where I call the optimiser's .step() function. The optimiser used was torch::optim::Adam.
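That would match the error: .step() updates the parameters in place (bumping their version counters), so calling backward() over a graph that was recorded before the step sees stale tensor versions. A minimal sketch of the failing ordering and the usual fix; the model and shapes here are made up for illustration:

    #include <torch/torch.h>

    int main() {
      torch::nn::Linear net(4, 1);
      torch::optim::Adam opt(net->parameters(), torch::optim::AdamOptions(1e-3));

      auto x = torch::randn({8, 4});
      auto loss = net->forward(x).pow(2).mean();

      // Broken ordering: step() rewrites the weights in place, so the graph
      // saved for `loss` refers to tensor versions that no longer exist.
      //   opt.step();
      //   loss.backward();  // -> "... modified by an inplace operation ..."

      // Working ordering: finish every backward() before stepping.
      opt.zero_grad();
      loss.backward();
      opt.step();
    }

So if you need several backward passes (e.g. torch::autograd::grad with create_graph, or multiple losses), make sure all of them complete before the optimiser steps.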