Custom PyTorch functions for Tensors AND Variables

How would I implement a custom PyTorch function that can take either Tensors or Variables?
My first approach was something like this:

import torch
from torch.autograd import Function

def _custom(a, b):  # operates on tensors
    pass  # implementation

class _Custom(Function):  # operates on variables (old-style Function API)
    def forward(self, a, b):
        return _custom(a, b)

    def backward(self, grad_output):
        pass  # implementation

def custom(a, b):  # works on tensors AND variables
    if torch.is_tensor(a):  # a Variable is not a Tensor pre-merge
        return _custom(a, b)
    else:
        return _Custom()(a, b)
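
For reference, I call it like this; the dispatch works because torch.is_tensor returns False for a Variable (pre-merge semantics, with the implementations above filled in):

x = torch.randn(3)                          # plain Tensor
y = torch.randn(3)
custom(x, y)                                # dispatches to _custom

from torch.autograd import Variable
vx = Variable(torch.randn(3), requires_grad=True)
vy = Variable(torch.randn(3))
custom(vx, vy)                              # dispatches to _Custom, tracks gradients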

This works, but it still requires a lot of boilerplate, so it doesn't seem like the right approach.
In the tools/autograd folder of the PyTorch repository, I found the files derivatives.yaml and gen_variable_type.py, which seem to auto-generate PyTorch functions that work on both Tensors and Variables. Can you give me a hint on how to adapt this mechanism for my own project?

Unfortunately, this is a bit complicated at the moment, even for development inside the PyTorch repo itself. The two files you mentioned are only used to generate methods for Variables; the Tensor methods are generated elsewhere. We are in the process of merging Variable and Tensor, so in the future a single method will suffice.
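
After the merge, a single Function will handle both cases, so the dispatch wrapper goes away. Roughly, the new-style API looks like this (element-wise multiplication stands in for a real op):

import torch
from torch.autograd import Function

class Custom(Function):
    @staticmethod
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)
        return a * b  # stand-in for the real forward computation

    @staticmethod
    def backward(ctx, grad_output):
        a, b = ctx.saved_tensors
        # gradients of a * b w.r.t. a and b
        return grad_output * b, grad_output * a

custom = Custom.apply  # works on any Tensor; autograd engages only when gradients are required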

As far as I understand, the PyTorch team has already merged Tensor and Variable on the master branch? :slight_smile: That's pretty cool, and we can expect a new release with it, right?

Yes that is correct :slight_smile:

Nice! When will you release it? Sorry for going off topic.

We still have some tasks to finish before we can make a release. Some bugs and performance regressions are being fixed, and as you can see, many of our docs need to be updated, as does the example/tutorial code. We are trying our best to make the release happen as soon as possible. :slight_smile: Thanks for your patience.
