TypeError: expected CPU (got CUDA) when subclassing / inheriting torch.Tensor class

The code below results in an error when it is run with a CUDA tensor input. How do I resolve the error?

import torch

class TestTensor(torch.Tensor):
    def test(self):
        print('test')

x = torch.ones(1, 3, 4, 4).cuda()

x_out = TestTensor(x)

This throws the following error:

Traceback (most recent call last):
  File "test_tensor.py", line 8, in <module>
    x_out = TestTensor(x)
TypeError: expected CPU (got CUDA)

Hi,

I am not sure if this is the expected behavior…
Could you open an issue on GitHub please so that we can track this?

Also, if possible, make sure you can reproduce this with the latest nightly build because this feature is in active development and it might have been fixed.

Hi,

I just tried the nightly build and this is still an issue for me. Was a GitHub issue opened for this?

Here is the issue; we are hitting the same problem.

What is the fix we should apply?

import torch

class Signal(torch.Tensor):
    @property
    def batch_size(self):
        assert self.ndim == 2
        return self.shape[0]

    @property
    def num_samples(self):
        assert self.ndim == 2
        return self.shape[1]

N = 88200
batch_size = 64
pitch = Signal(torch.zeros(batch_size, N, device='cuda'))

which raises:

TypeError: expected CPU (got CUDA)

What should we be doing instead?

Solution, according to the issue:
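The issue's exact snippet isn't reproduced here, so as a hedged sketch: the workaround commonly recommended for this error is to bypass `torch.Tensor`'s default constructor (which assumes CPU storage) and instead wrap the existing tensor in `__new__` via `torch.Tensor.as_subclass`, which keeps the original storage and device. Applied to the `Signal` class from this thread:

```python
import torch

class Signal(torch.Tensor):
    # Wrap the existing tensor (CPU or CUDA) instead of letting
    # torch.Tensor's default constructor rebuild it in CPU storage.
    def __new__(cls, data):
        return data.as_subclass(cls)

    @property
    def batch_size(self):
        assert self.ndim == 2
        return self.shape[0]

    @property
    def num_samples(self):
        assert self.ndim == 2
        return self.shape[1]

# Runs on CPU here; on a GPU machine the same call also works
# with torch.zeros(..., device='cuda').
pitch = Signal(torch.zeros(64, 88200))
print(type(pitch).__name__, pitch.batch_size, pitch.num_samples)
```

`as_subclass` returns a view of the same data with the subclass type, so no copy or device transfer happens; the same `__new__` pattern applies to the `TestTensor` example at the top of the thread.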
