Custom gradient calculation for tensor-like type

I am attempting to create a tensor-like class. I have been following the instructions at “Extending torch with a Tensor-like type”. I have a couple of questions, especially pertaining to gradient storage and calculation:

  1. I want to initialize my class from a (float) tensor and be able to convert it back. I know I can retrieve the data using the numpy() method, but how do I get the gradient data if I wish to store that too? And when I convert back to a tensor, how can I give it back the stored gradient data? (A rough sketch of what I mean appears after this list.)
  2. I saw the instructions for making functions like torch.add work with my custom type, but I will also need to modify how autograd calculates the gradient. How do I define both custom forward and backward versions of torch.add? I have also seen the instructions for extending torch.autograd with a custom Function, but I am not sure whether that mechanism can be combined with this case. (The second sketch below is roughly what I am imagining.)
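For question 1, here is a rough, untested sketch of what I have in mind. `MyTensor`, `from_tensor`, and `to_tensor` are just placeholder names I made up, and I am assuming CPU float tensors throughout:

```python
import torch

class MyTensor:
    """Hypothetical wrapper; class and method names are placeholders."""

    def __init__(self, data, grad=None):
        self.data = data  # numpy array of values
        self.grad = grad  # numpy array of gradients, or None

    @classmethod
    def from_tensor(cls, t):
        # detach() so .numpy() also works on tensors that require grad
        # (.numpy() assumes the tensor lives on the CPU)
        data = t.detach().numpy()
        grad = t.grad.numpy() if t.grad is not None else None
        return cls(data, grad)

    def to_tensor(self):
        t = torch.from_numpy(self.data).requires_grad_(True)
        if self.grad is not None:
            # .grad of a leaf tensor can be assigned directly
            t.grad = torch.from_numpy(self.grad)
        return t
```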
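And for question 2, this is roughly the combination I am imagining, also untested. Here, unlike in the first sketch, I am assuming the tensor-like type is implemented as a torch.Tensor subclass, so I can unwrap to plain tensors, route through a custom torch.autograd.Function, and rewrap. `ClampedAdd` and its 0.5 gradient scaling are a toy rule of my own, not anything PyTorch provides, and this only handles the two-tensor form of torch.add with no keyword arguments:

```python
import torch

class ClampedAdd(torch.autograd.Function):
    """Toy custom add: forward is x + y, backward halves the gradients."""

    @staticmethod
    def forward(ctx, x, y):
        return x + y

    @staticmethod
    def backward(ctx, grad_out):
        # Replaces autograd's default rule for add with a custom one.
        return 0.5 * grad_out, 0.5 * grad_out

class MyTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is torch.add and not kwargs:
            # Unwrap to plain tensors, apply the custom Function, rewrap.
            plain = [a.as_subclass(torch.Tensor) if isinstance(a, MyTensor) else a
                     for a in args]
            return ClampedAdd.apply(*plain).as_subclass(cls)
        return super().__torch_function__(func, types, args, kwargs)

# Hypothetical usage:
x = torch.ones(3).as_subclass(MyTensor).requires_grad_(True)
y = torch.ones(3).as_subclass(MyTensor)
torch.add(x, y).sum().backward()
print(x.grad)  # should show the halved gradients if the custom backward ran
```

I am not sure this is the right general approach, which is why I am asking.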

Any help is appreciated. Thank you!

Hi, I am facing the same problem. Did you find out how to solve this?
I also tried subclassing torch.Tensor, but the problem persists when trying to get gradients, for example through torch.nn.Parameter(), since Parameter inherits directly from torch.Tensor.
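To make that concrete, here is a minimal illustration of the kind of thing I am seeing (exact behavior may vary across PyTorch versions):

```python
import torch

class MyTensor(torch.Tensor):
    pass  # minimal subclass, no custom behavior

t = torch.ones(3).as_subclass(MyTensor).requires_grad_(True)
p = torch.nn.Parameter(t)  # Parameter re-wraps the underlying data as a new leaf
(p * 2).sum().backward()
print(p.grad)  # the gradient lands on the Parameter...
print(t.grad)  # ...while the original subclass instance stays None
```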

Hi,

I could not get it to work through PyTorch, so I ended up writing my own small PyTorch-like framework that was sufficient for my needs. You can find it at https://github.com/KPJoshi/Fixed-Point-RNN-Training; in particular, take a look at fxpTensor.py. You are welcome to adapt and/or extend it as you wish, and feel free to ask me if you have any questions.
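The core idea is the usual define-by-run autograd pattern: each tensor stores its data, a gradient buffer, and a closure that pushes its gradient to its parents. The sketch below is a heavily simplified illustration of that pattern, not the actual contents of fxpTensor.py, and all names (`MiniTensor`, etc.) are illustrative:

```python
import numpy as np

class MiniTensor:
    """Heavily simplified autograd tensor; not the actual fxpTensor.py."""

    def __init__(self, data, parents=()):
        self.data = np.asarray(data, dtype=np.float64)
        self.grad = np.zeros_like(self.data)
        self._parents = parents   # tensors this one was computed from
        self._backward_fn = None  # pushes self.grad to the parents

    def __add__(self, other):
        out = MiniTensor(self.data + other.data, parents=(self, other))
        def backward_fn():
            # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = backward_fn
        return out

    def __mul__(self, other):
        out = MiniTensor(self.data * other.data, parents=(self, other))
        def backward_fn():
            # product rule
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward_fn = backward_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(t):
            if id(t) not in seen:
                seen.add(id(t))
                for parent in t._parents:
                    visit(parent)
                order.append(t)
        visit(self)
        self.grad = np.ones_like(self.data)
        for t in reversed(order):
            if t._backward_fn is not None:
                t._backward_fn()

# Example: c = a*b + a, so dc/da = b + 1 and dc/db = a.
a, b = MiniTensor(2.0), MiniTensor(3.0)
c = a * b + a
c.backward()
print(a.grad, b.grad)  # 4.0 2.0
```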