Utilizing autograd in C++ extensions for a custom operator

Hi, everyone
Is there any way to use the autograd mechanism in a C++ extension?
Specifically, I want to write a custom operator in C++ that may call some off-the-shelf functions, for example at::softmax. I do not want to re-implement the backward pass of softmax by hand.
My requirement is described in the following pseudo-code:

at::Tensor forward(at::Tensor input) {
    auto a = custom_func(input);
    auto b = at::softmax(a, /*dim=*/-1);
    save_for_backward_somehow(a, b); // placeholder: stash tensors for the backward pass
    return b;
}

at::Tensor backward(at::Tensor out_grad) {
    auto saved = saved_tensor();     // placeholder: retrieve the stashed tensors
    auto a = saved[0], b = saved[1];
    b.backward(out_grad);            // avoid re-implementing the backward of softmax
    auto grad_of_a = a.grad();       // after calling backward on b, get the gradient of a somehow
    return get_input_grad(grad_of_a);
}

I would appreciate any comments or suggestions.

Yes, libtorch supports Autograd and you might find some examples in this test file.
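
For reference, tensor-level autograd works in libtorch much like it does in Python. A minimal sketch (the shapes and the toy loss below are just illustrative):

#include <torch/torch.h>
#include <iostream>

int main() {
    // Leaf tensor that records gradients, like torch.randn(..., requires_grad=True) in Python.
    auto a = torch::randn({3, 4}, torch::requires_grad());
    auto b = torch::softmax(a, /*dim=*/1);

    // Any scalar built from b can be backpropagated through the softmax.
    auto loss = (b * b).sum();
    loss.backward();

    // a.grad() now holds d(loss)/d(a), computed by autograd.
    std::cout << a.grad() << std::endl;
}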

You can use torch::autograd::Function exactly the same way you’d use torch.autograd.Function in Python. So you could first read the Python documentation on torch.autograd.Function, then navigate in your C++ IDE to where torch::autograd::Function is defined and look at the example given in the comments. That’s how I got the hang of it very quickly.
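
To make that concrete for the snippet in the question, here is a minimal sketch of a torch::autograd::Function subclass. custom_func is only a stand-in for your real preprocessing, and re-running the softmax under torch::AutoGradMode together with torch::autograd::grad is one possible way (not the only one) to let autograd derive the softmax gradient inside backward rather than writing it by hand:

#include <torch/torch.h>

using torch::autograd::AutogradContext;
using torch::autograd::Function;
using torch::autograd::tensor_list;

// Stand-in for the custom_func from the question; a simple scale so the sketch compiles.
torch::Tensor custom_func(const torch::Tensor& x) { return x * 2; }

struct MyOp : public Function<MyOp> {
    static torch::Tensor forward(AutogradContext* ctx, torch::Tensor input) {
        auto a = custom_func(input);
        auto b = torch::softmax(a, /*dim=*/-1);
        ctx->save_for_backward({a}); // stash what backward needs
        return b;
    }

    static tensor_list backward(AutogradContext* ctx, tensor_list grad_outputs) {
        auto a = ctx->get_saved_variables()[0];

        // Grad mode is off inside backward, so re-enable it and let autograd
        // differentiate the softmax instead of hand-coding its derivative.
        torch::AutoGradMode enable_grad(true);
        auto a_req = a.detach().requires_grad_(true);
        auto b = torch::softmax(a_req, /*dim=*/-1);
        auto grad_of_a = torch::autograd::grad({b}, {a_req}, {grad_outputs[0]})[0];

        // Backward of the stand-in custom_func above (d(2x)/dx = 2).
        return {grad_of_a * 2};
    }
};

// Usage: MyOp::apply participates in autograd like any built-in op.
// auto out = MyOp::apply(input);
// out.sum().backward();

Note that this recomputes the softmax during backward; if that cost matters, you could save b instead and implement the softmax gradient formula manually.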