How can I switch off gradients with C++/libtorch?

Hi,

When I use the Python API, I can call torch.set_grad_enabled(False), which switches off gradient computation globally. How can I do that with the libtorch/C++ API?

The RAII guard torch::AutoGradMode enable_grad(true/false); should work as described in the docs:

#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::tensor({1.}, torch::requires_grad());
  {
    torch::AutoGradMode enable_grad(true);
    auto y = x * 2;
    std::cout << std::boolalpha << y.requires_grad() << std::endl; // prints `true`
  }
  {
    torch::AutoGradMode enable_grad(false);
    auto y = x * 2;
    std::cout << std::boolalpha << y.requires_grad() << std::endl; // prints `false`
  }
}
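If you just want gradients off for a whole inference scope (the C++ counterpart of Python's torch.no_grad()), libtorch also provides the RAII guard torch::NoGradGuard. A minimal sketch:

#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::tensor({1.}, torch::requires_grad());

  // Gradient tracking is disabled on this thread for the lifetime of the guard.
  torch::NoGradGuard no_grad;
  auto y = x * 2;
  std::cout << std::boolalpha << y.requires_grad() << std::endl; // prints `false`
}

AutoGradMode with false, as above, has the same effect for the guarded scope.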

Thanks for pointing this out!
By the way, if I want to use libtorch for inference, can I use backends other than the default PyTorch engine, such as TVM? If so, how can I do this?

I don’t know enough about TVM, but @tom, as a TVM developer, might know more about it. 🙂

You can use DLPack to move your tensors to and from TVM:

import torch
import torch.utils.dlpack
import tvm

def tensor_to_tvm(t):
    return tvm.nd.from_dlpack(torch.utils.dlpack.to_dlpack(t))

def tensor_from_tvm(a):
    return torch.utils.dlpack.from_dlpack(a.to_dlpack())

This lets you feed PyTorch tensors to TVM models and recover the results.
If you want to convert part of your PyTorch model to TVM, you can use the JIT and the TVM PyTorch frontend.

I have an in-depth example that does funny tricks to also enable training in the Transformers, PyTorch and TVM blog post.

In my courses, we also cover converting models to TVM and then running them from C++, and that kind of thing.
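For reference, here is a rough sketch (not from the thread) of what the C++ side can look like once a model has been compiled with TVM's graph executor and exported as a shared library. The file name model.so, the input name input0, and the input shape are placeholders; the entry points ("default", "set_input", "run", "get_output") follow TVM's cpp_deploy example and may differ between TVM versions. The libtorch tensor is handed over via DLPack, mirroring the Python helpers above.

#include <torch/torch.h>
#include <ATen/DLConvertor.h>          // at::toDLPack
#include <tvm/runtime/module.h>
#include <tvm/runtime/ndarray.h>
#include <tvm/runtime/packed_func.h>

int main() {
  DLDevice dev{kDLCPU, 0};  // older TVM versions use DLContext here

  // Load the library produced by TVM and instantiate the graph executor.
  tvm::runtime::Module factory = tvm::runtime::Module::LoadFromFile("model.so");
  tvm::runtime::Module gmod = factory.GetFunction("default")(dev);
  tvm::runtime::PackedFunc set_input = gmod.GetFunction("set_input");
  tvm::runtime::PackedFunc run = gmod.GetFunction("run");
  tvm::runtime::PackedFunc get_output = gmod.GetFunction("get_output");

  // Hand a libtorch tensor to TVM through DLPack (zero-copy on the same device).
  torch::Tensor input = torch::rand({1, 3, 224, 224});
  tvm::runtime::NDArray tvm_input =
      tvm::runtime::NDArray::FromDLPack(at::toDLPack(input));

  set_input("input0", tvm_input);
  run();
  tvm::runtime::NDArray output = get_output(0);
}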

Best regards

Thomas