The code below results in an error when it's run with a CUDA tensor input. How do I resolve the error?
import torch

class TestTensor(torch.Tensor):
    def test(self):
        print('test')

x = torch.ones(1, 3, 4, 4).cuda()
x_out = TestTensor(x)
Throws the following error:
Traceback (most recent call last):
File "test_tensor.py", line 8, in <module>
x_out = TestTensor(x)
TypeError: expected CPU (got CUDA)
albanD
(Alban D)
February 1, 2021, 2:41pm
Hi,
I am not sure if this is the expected behavior…
Could you open an issue on github please so that we can track this?
Also, if possible, make sure you can reproduce this with the latest nightly build because this feature is in active development and it might have been fixed.
jordie_s
(Jordie Shier)
March 7, 2021, 8:27pm
Hi,
I just tried the nightly build and this is still an issue for me. Did a github issue get made for this?
turian
March 7, 2021, 8:38pm
Here is the issue. We are hitting the same error.
What is the fix we should apply?
import torch

class Signal(torch.Tensor):
    @property
    def batch_size(self):
        assert self.ndim == 2
        return self.shape[0]

    @property
    def num_samples(self):
        assert self.ndim == 2
        return self.shape[1]

N = 88200
batch_size = 64
pitch = Signal(torch.zeros(batch_size, N, device='cuda'))
TypeError: expected CPU (got CUDA)
What should we be doing instead?
turian
March 7, 2021, 10:14pm
Solution, according to the issue:
import torch

class Signal(torch.Tensor):
    @property
    def batch_size(self):
        assert self.ndim == 2
        return self.shape[0]

    @property
    def num_samples(self):
        assert self.ndim == 2
        return self.shape[1]

N = 88200
batch_size = 64
pitch = torch.zeros(batch_size, N, device='cuda').as_subclass(Signal)
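For reference, here is a minimal sketch of the same fix on CPU (so it runs without a GPU). as_subclass reinterprets an existing tensor as the subclass without constructing a new one, so the subclass's properties work while the tensor's data and device are preserved; the N=100 size here is just illustrative:

```python
import torch

class Signal(torch.Tensor):
    @property
    def batch_size(self):
        assert self.ndim == 2
        return self.shape[0]

# Reinterpret an existing tensor as the subclass; no copy, no
# device restriction, unlike calling Signal(tensor) directly.
x = torch.zeros(4, 100)
sig = x.as_subclass(Signal)

print(type(sig).__name__)  # Signal
print(sig.batch_size)      # 4
```

The same call works on a CUDA tensor, which is exactly what the Signal(...) constructor above fails on.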