I am attempting to create a tensor-like class, following the instructions at "Extending torch with a Tensor-like type." I have some questions, mainly about gradient storage and calculation:
- I want to initialize my class from a (float) tensor and be able to convert it back. I know I can retrieve the data using the numpy() function, but how do I get the gradient data if I want to store that too? And when I convert back to a tensor, how can I give it back the stored gradient data?
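To make the first question concrete, here is a minimal sketch of the round trip I have in mind (the class name, the numpy-based storage, and the direct assignment to .grad are my own guesses, not something from the docs):

```python
import torch

class MyTensor:
    """Hypothetical tensor-like class storing data and gradient as numpy arrays."""

    def __init__(self, data, grad=None):
        self.data = data  # numpy array holding the values
        self.grad = grad  # numpy array holding the gradient, or None

    @classmethod
    def from_tensor(cls, t):
        # detach() before numpy() since t may require grad;
        # t.grad is only populated after backward() has run.
        grad = t.grad.numpy().copy() if t.grad is not None else None
        return cls(t.detach().numpy().copy(), grad)

    def to_tensor(self):
        t = torch.tensor(self.data, requires_grad=True)
        if self.grad is not None:
            # Leaf tensors allow assigning .grad a tensor of the same shape.
            t.grad = torch.tensor(self.grad)
        return t

# Round trip: gradient survives conversion to MyTensor and back.
x = torch.ones(3, requires_grad=True)
x.sum().backward()                 # x.grad is now all ones
m = MyTensor.from_tensor(x)
y = m.to_tensor()
print(y.grad)                      # tensor([1., 1., 1.])
```

Is directly assigning to .grad like this the supported way to restore gradient data, or is there a better mechanism?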
- I saw the instructions for making functions like torch.add work with my custom type, but I will also need to modify how autograd calculates the gradient. How do I define both the custom forward and backward versions of torch.add? I saw the instructions for extending torch.autograd with a custom Function, but I am not sure whether that feature can be combined with this case.
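Here is a sketch of what I am imagining for the second question, combining the two mechanisms: a torch.autograd.Function whose backward I control, dispatched to from __torch_function__ on my wrapper class. The class name and the gradient rule (doubling the usual gradient) are just illustrative:

```python
import torch

class MyAdd(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a, b):
        return a + b

    @staticmethod
    def backward(ctx, grad_output):
        # Custom gradient rule: double the usual gradient of addition.
        return 2 * grad_output, 2 * grad_output

class MyTensorLike:
    """Hypothetical tensor-like wrapper implementing the __torch_function__ protocol."""

    def __init__(self, t):
        self.t = t  # underlying torch.Tensor

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is torch.add and not kwargs:
            # Unwrap the arguments, run my custom Function, re-wrap the result.
            unwrapped = [a.t if isinstance(a, MyTensorLike) else a for a in args]
            return MyTensorLike(MyAdd.apply(*unwrapped))
        return NotImplemented

x = MyTensorLike(torch.ones(2, requires_grad=True))
y = MyTensorLike(torch.ones(2, requires_grad=True))
z = torch.add(x, y)      # dispatches to MyAdd via __torch_function__
z.t.sum().backward()
print(x.t.grad)          # tensor([2., 2.]) because of the custom backward
```

Is routing torch.add through my own autograd.Function like this a sound approach, or is there an intended pattern for this combination?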
Any help is appreciated. Thank you!