How would I implement a custom PyTorch function that can take either Tensors or Variables?
My first approach was something like this:
    import torch
    from torch.autograd import Function

    def _custom(a, b):
        # operates on tensors
        pass  # implementation

    class _Custom(Function):
        # operates on Variables
        def forward(self, a, b):
            return _custom(a, b)

        def backward(self, grad_output):
            pass  # implementation

    def custom(a, b):
        # works on tensors AND Variables
        if torch.is_tensor(a):
            return _custom(a, b)
        else:
            return _Custom()(a, b)
This works, but it requires this same boilerplate for every new function. It doesn't seem like the way to go.
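The repetition could perhaps be factored out with a small dispatch helper. This is just a sketch reusing the definitions above; make_dispatcher is a hypothetical name, and it assumes the legacy instantiate-per-call Function API used in my code:

    import torch

    def make_dispatcher(tensor_fn, function_cls):
        # Hypothetical helper: returns a wrapper that calls the raw tensor
        # implementation for Tensors and the autograd Function for Variables.
        def wrapper(*args):
            if torch.is_tensor(args[0]):
                return tensor_fn(*args)
            return function_cls()(*args)
        return wrapper

    custom = make_dispatcher(_custom, _Custom)

But this still means writing a tensor function, a Function subclass, and a dispatcher per operation, which is why I went looking for how PyTorch itself handles this.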
In the tools/autograd folder of the PyTorch repository, I found the files derivatives.yaml and gen_variable_type.py, which seem to autogenerate PyTorch functions that work on both Tensors and Variables. Can you give me a hint on how to adapt this mechanism to my own project?
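For reference, entries in derivatives.yaml pair a function signature with backward formulas for each differentiable argument, from which gen_variable_type.py generates the Variable-level wrappers. The exact schema varies across PyTorch versions, but an entry looks roughly like this:

    - name: mul(Tensor self, Tensor other)
      self: grad * other
      other: grad * self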