Unclear about relation between TorchScript and backprop

Hi,
I am trying to write a TorchScript operator to replace a custom torch.autograd.Function that influences the gradient computation, since TorchScript cannot serialize calls to custom Functions.
Is there an equivalent of torch.autograd.Function in ATen or Torch?

Best regards
Leopold

@leowalkling, could you elaborate more on what you want to replace here, with an example? In case it’s helpful, in the JIT we currently put ops into two categories:

  1. Ops that have autodiff formulas in autodiff.cpp: their backward graph is built by the JIT in the graph executor (see https://github.com/pytorch/pytorch/blob/0cb24098c74f8ebed81ec08b83bf6cb5ab3903f5/torch/csrc/jit/graph_executor.cpp#L76).
  2. Ops that don’t have autodiff formulas: their backward is handled by eager-mode autograd directly.

Please let us know the context of the problem so that we can help more.

Thanks,
Ailing

For reference, I’ve come up with a solution by imitating the generated kernels and the code generation from /tools/autograd.
It seems to me that this is a very intricate task, given how little documentation there is, so I made sure to follow the original code of the built-in operations closely.

The purpose I had in mind was to perform an in-place modification of a variable without incrementing its version counter and thereby invalidating its former gradient, instead creating a new variable that depends on the old one.
I’m using this in my implementations of IAF, MADE, etc. to save RAM (by keeping fewer clones of the same data).
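
For context, the failure mode this avoids looks roughly like the following in eager mode: an ordinary in-place op bumps the version counter of the tensor it mutates, and any backward pass that saved that tensor then refuses to run. A minimal sketch using the C++ API (the particular ops are arbitrary, chosen only so that the saved tensor is the one being mutated):

```cpp
// Eager-mode illustration (not the solution): an in-place op invalidates
// gradients of anything that saved the mutated tensor for backward.
#include <torch/torch.h>

int main() {
  auto x = torch::randn({3}, torch::requires_grad());
  auto y = x.exp();    // exp() saves its output for the backward pass
  y.add_(1.0);         // the in-place update bumps y's version counter
  // backward() now throws an error along the lines of "one of the variables
  // needed for gradient computation has been modified by an inplace operation".
  y.sum().backward();
}
```

The custom op described below aims to keep the memory behaviour of the in-place update without tripping this check.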

The following is what wasn’t obvious (to me) from the docs; a rough sketch putting the steps together follows the list:

  1. Use ta::make_variable or ta::make_variable_view to create the output Variables.
  2. Implement a new subclass of ta::TraceableFunction in your module.
    With PyTorch 1.0 from Conda, the headers lack the generated code, such as the xxxBackward classes, so there is no good way to reuse the existing subclasses of ta::TraceableFunction.
  3. Instantiate your custom xxxBackward and register it in the graph using the methods set_next_edges and add_input_metadata of TraceableFunction and set_gradient_edge of Variable. Helper functions are available in “torch/csrc/autograd/functions/utils.h”.
  4. TorchScript compatibility requires registering the op as described in the tutorials on extending TorchScript.
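
Below is a minimal sketch of how these steps might fit together, assuming ta is an alias for torch::autograd. The names ScaledInplaceBackward and my_ops::scaled_inplace are made up for illustration, the internal autograd headers are not a stable API, and the registration call is the 1.0-era JIT form from the custom-op tutorial, so details will differ on other versions:

```cpp
// Hypothetical sketch only: ScaledInplaceBackward and my_ops::scaled_inplace
// are made-up names; the autograd internals used here are not a stable API.
#include <torch/script.h>
#include <torch/csrc/autograd/variable.h>
#include <torch/csrc/autograd/function.h>
#include <torch/csrc/autograd/functions/utils.h>

namespace ta = torch::autograd;

// Step 2: a hand-written analogue of the generated xxxBackward classes.
struct ScaledInplaceBackward : public ta::TraceableFunction {
  std::string name() const override { return "ScaledInplaceBackward"; }
  ta::variable_list apply(ta::variable_list&& grads) override {
    // out = input * scale, so d(out)/d(input) = scale.
    return {grads[0] * scale};
  }
  double scale = 1.0;
};

// Multiplies `self` by `scale` in place, but returns a fresh Variable that
// carries the backward node, leaving the old Variable's history untouched.
at::Tensor scaled_inplace(at::Tensor self, double scale) {
  auto& input = ta::as_variable_ref(self);

  std::shared_ptr<ScaledInplaceBackward> grad_fn;
  if (input.requires_grad()) {
    grad_fn = std::make_shared<ScaledInplaceBackward>();
    grad_fn->scale = scale;
    // Step 3a: connect the new node to the producers of `input`.
    grad_fn->set_next_edges(ta::collect_next_edges(input));
  }

  // The actual in-place modification, done on the underlying data tensor so
  // the input Variable's version counter is not bumped.
  auto data = input.data();
  data.mul_(scale);

  // Step 1: wrap the (shared) storage in a new output Variable.
  auto output = ta::make_variable(data, /*requires_grad=*/false);

  if (grad_fn) {
    // Step 3b: point the output's gradient edge at the new backward node.
    auto input_nr = grad_fn->add_input_metadata(output);
    output.set_gradient_edge({grad_fn, input_nr});
  }
  return output;
}

// Step 4: register the op so TorchScript can call it (the registration API
// has changed across releases; this is the 1.0-era JIT form).
static auto registry =
    torch::jit::RegisterOperators("my_ops::scaled_inplace", &scaled_inplace);
```

Compared with the generated kernels, this skips checks they normally perform (for example, consulting GradMode and whether each output actually needs a gradient), so treat it as a starting point rather than a drop-in recipe.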