Subclassing tensors

NumPy has an in-depth guide on subclassing ndarray, which allows arrays to carry additional attributes and behaviors while still retaining the core ndarray functionality.

Does any similar feature exist for PyTorch tensors? I’ve tried several things, but unfortunately not been able to get it working. I found a similar discussion from a year ago but no good answer was provided.


I gave it a shot, going beyond what was discussed in the older thread you linked:


import torch

class MyObject(torch.Tensor):
    def __new__(cls, x, extra_data, *args, **kwargs):
        return super().__new__(cls, x, *args, **kwargs)

    def __init__(self, x, extra_data):
        self.extra_data = extra_data

    def clone(self, *args, **kwargs):
        # carry the metadata over when cloning
        return MyObject(super().clone(*args, **kwargs), self.extra_data)

    def to(self, *args, **kwargs):
        # build a fresh MyObject and copy the moved tensor's data into it,
        # so the metadata survives device/dtype changes
        new_obj = MyObject([], self.extra_data)
        tempTensor = super().to(*args, **kwargs)
        new_obj.data = tempTensor.data
        new_obj.requires_grad = tempTensor.requires_grad
        return new_obj

obj1 = MyObject([1, 2, 3], 'extra_data_123')
obj2 = obj1.to('cuda')
t1 = torch.Tensor([1, 2, 3])
t2 = t1.to('cuda')

Hope this helps

This almost works, but it misses one of the key benefits of NumPy subclassing: various operations, most notably slicing and basic arithmetic like addition, preserve the added information. I’d like to be able to run: print(obj1[:2].extra_data).
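For what it’s worth, newer PyTorch versions (roughly 1.7+) give tensor subclasses a default `__torch_function__` hook that you can override to propagate metadata through operations such as slicing and addition. A rough sketch, assuming a recent PyTorch; the `MetaTensor` name and the copy-metadata-from-first-argument policy are my own choices, not anything official:

```python
import torch

class MetaTensor(torch.Tensor):
    # hypothetical subclass that carries .extra_data through torch ops
    @staticmethod
    def __new__(cls, x, extra_data=None, *args, **kwargs):
        # as_subclass avoids the legacy Tensor(data) constructor
        return torch.as_tensor(x).as_subclass(cls)

    def __init__(self, x, extra_data=None):
        self.extra_data = extra_data

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        # let the default implementation run the op and wrap the result
        result = super().__torch_function__(func, types, args, kwargs)
        if isinstance(result, MetaTensor):
            # copy metadata from the first MetaTensor argument, if any
            source = next((a for a in args if isinstance(a, MetaTensor)), None)
            if source is not None and getattr(source, 'extra_data', None) is not None:
                result.extra_data = source.extra_data
        return result

obj1 = MetaTensor([1., 2., 3.], extra_data='extra_data_123')
print(obj1[:2].extra_data)    # slicing preserves the metadata
print((obj1 + 1).extra_data)  # so does addition
```

The policy here is deliberately simplistic (first matching argument wins, and tuple-valued results are left untouched), but it shows the mechanism.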

Furthermore, is there any way to subclass boolean, integer, or other types of tensors? Currently, using them as a base class throws an error. But I can find no way to modify or set the type of a tensor subclass; it’s stuck on float32.
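On the dtype point: at least in recent PyTorch versions you can sidestep the float32 restriction by creating the tensor with the dtype you want first and then re-classing it with `Tensor.as_subclass`, which works for boolean and integer tensors too. A minimal sketch (`TaggedTensor` is just an illustrative name):

```python
import torch

class TaggedTensor(torch.Tensor):
    # plain subclass; extra attributes live on instances
    pass

# create the tensor with the desired dtype, then convert it to the subclass
mask = torch.tensor([True, False, True]).as_subclass(TaggedTensor)
mask.extra_data = 'bool tag'

ints = torch.tensor([1, 2, 3], dtype=torch.int64).as_subclass(TaggedTensor)

print(mask.dtype)  # torch.bool
```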

Is this still the case?

@tyoc213 see here: TypeError: expected CPU (got CUDA) when sublcassing / inheriting torch.Tensor class - #5 by turian
