Custom torch.autograd.Function backward()

In this example, PyTorch: Defining New autograd Functions — PyTorch Tutorials 2.1.0+cu121 documentation, specifically in backward(), the gradient is computed manually and hardcoded. Is there a way to make autodiff do the work instead of defining it manually in backward()? I'm trying to extend PyTorch with a custom CUDA library and make PyTorch aware of it by wrapping a custom CUDA API call inside the forward() of a new Function. However, I don't want to manually derive the gradient of the function defined in forward(), which autodiff should be able to handle; does the fact that it happens to be wrapped inside a new Function mean it has to be defined in backward()?
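To make the setup concrete, here is a minimal sketch of what I mean, assuming a hypothetical extension module `my_cuda_ext` whose `my_op` and `my_op_grad` functions are placeholders for the real custom CUDA kernels (not an actual API):

```python
import torch
import my_cuda_ext  # hypothetical custom CUDA extension, for illustration only


class MyOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Autograd cannot "see" inside this call, so it cannot derive a gradient for it.
        return my_cuda_ext.my_op(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # The gradient currently has to be supplied by hand (or by another custom kernel).
        return grad_output * my_cuda_ext.my_op_grad(x)
```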

No, you would need to define the backward yourself once you "leave" PyTorch and call into a 3rd party library or custom CUDA/C++ code, since Autograd cannot trace those operations. To allow PyTorch's Autograd to derive the backward for you, implement the computation with torch operations.
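For contrast, a small sketch showing the case where the forward computation is built entirely from torch operations: autograd records the graph and derives the gradient on its own, so no custom Function or manual backward() is needed:

```python
import torch

x = torch.randn(4, requires_grad=True)

# Composed only of torch operations, so autograd tracks the graph
# and computes the backward pass automatically.
y = (x.sin() * x.exp()).sum()
y.backward()

print(x.grad)  # d/dx [sin(x) * exp(x)] = (cos(x) + sin(x)) * exp(x)
```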