I’m trying to create a C++ extension following this tutorial.
It seems that if we use torch::Tensor as the input to a C++ function and perform only native PyTorch operations on it, no custom backward function is necessary, since autograd can record and differentiate through those operations.
Yet the tutorial implements a backward pass, even though, as far as I can tell, its forward pass takes torch::Tensor inputs and applies only native PyTorch operations to them. So it seems we don’t need to implement the backward pass by hand in this case; instead, in lltm_backward we could just call .backward() on the forward pass’s output – is that correct?
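To illustrate what I mean, here is a minimal Python sketch (lltm_like_forward is a hypothetical stand-in for the extension’s forward, not the tutorial’s actual code) showing that a function composed purely of native PyTorch operations gets gradients from autograd without any hand-written backward:

```python
import torch

def lltm_like_forward(x, w):
    # Hypothetical stand-in for a C++ extension forward that uses only
    # native PyTorch (ATen) operations: autograd records each op, so no
    # hand-written backward function is required.
    return torch.sigmoid(x @ w).sum()

x = torch.randn(3, 4, requires_grad=True)
w = torch.randn(4, 2, requires_grad=True)

out = lltm_like_forward(x, w)
out.backward()  # autograd differentiates through the recorded native ops

assert x.grad is not None and x.grad.shape == x.shape
assert w.grad is not None and w.grad.shape == w.shape
```

My understanding is that the same recording happens when the ops run inside a C++ function bound via pybind11, which is why I’m unsure what the explicit backward in the tutorial buys us here.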