I tried to wrap `autograd.Variable` so that the data is sent to the GPU every time I construct a Variable:
```python
import torch
from torch import autograd

class Variable(autograd.Variable):
    def __init__(self, data, *args, **kwargs):
        # Move the underlying tensor to the GPU before constructing the Variable
        data = data.cuda()
        super(Variable, self).__init__(data, *args, **kwargs)

a = torch.randn(1, 1)
print(a)
print(Variable(a))
print(Variable(a.cuda()))
```
However, I got the following output:
```
-0.2344
[torch.FloatTensor of size 1x1]

Variable containing:
-0.2344
[torch.FloatTensor of size 1x1]

Variable containing:
-0.2344
[torch.cuda.FloatTensor of size 1x1 (GPU 0)]
```
I expected `Variable(a)` to return a `torch.cuda.FloatTensor`, but I got a plain `torch.FloatTensor`; the `data = data.cuda()` call in my `__init__` seems to have no effect.
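For comparison, a plain factory function does move the data the way I expected, matching the third line of output above (a minimal sketch; `make_cuda_variable` is just a name I made up here, and it assumes a CUDA-capable GPU is available):

```python
import torch
from torch import autograd

def make_cuda_variable(data, *args, **kwargs):
    # Hypothetical helper: move the tensor to the GPU first, then wrap it
    return autograd.Variable(data.cuda(), *args, **kwargs)

print(make_cuda_variable(torch.randn(1, 1)))  # prints a torch.cuda.FloatTensor
```

So wrapping an already-moved tensor works fine; only the `__init__` override in my subclass seems to be bypassed.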
Has anyone run into the same problem?