CudaTensor Variable

I tried to wrap autograd.Variable so that the data is moved to the GPU every time I construct a Variable:

    import torch
    from torch import autograd

    class Variable(autograd.Variable):
        def __init___(self, data, *args, **kwargs):
            data = data.cuda()
            super(Variable, self).__init___(data, *args, **kwargs)

    a = torch.randn(1, 1)
    print(a)
    print(Variable(a))
    print(Variable(a.cuda()))

However, the output was:

    -0.2344
    [torch.FloatTensor of size 1x1]

    Variable containing:
    -0.2344
    [torch.FloatTensor of size 1x1]

    Variable containing:
    -0.2344
    [torch.cuda.FloatTensor of size 1x1 (GPU 0)]

I expected Variable(a) to return a torch.cuda.FloatTensor, but I got a torch.FloatTensor.

Has anyone run into the same problem?

Thank you!

Here’s a working version of such a wrapper. (The class above fails because __init___ is misspelled with an extra trailing underscore, so Python never calls the overridden method and the data stays on the CPU.)

    def Variable(data, *args, **kwargs):
        return autograd.Variable(data.cuda(), *args, **kwargs)
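To see why the class-based attempt never runs its initializer, here is a minimal torch-free reproduction. A method named __init___ (three trailing underscores) is just an ordinary attribute as far as Python is concerned, so object construction looks up __init__ in the base class instead. (Base and Sub are illustrative names, not from the original post.)

```python
class Base:
    def __init__(self, data):
        self.data = data

class Sub(Base):
    def __init___(self, data):  # typo: three trailing underscores, never called
        self.data = data.upper()
        super().__init__(self.data)

s = Sub("gpu")
print(s.data)  # prints "gpu", not "GPU": Base.__init__ ran, the misspelled method did not
```

The same thing happens with the Variable subclass above: autograd.Variable handles the construction, and the data.cuda() line is never reached.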

It works. Thanks a lot!
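For anyone reusing this wrapper: a slightly more defensive variant, sketched below, falls back to the CPU when no GPU is present (the torch.cuda.is_available() check is an addition of mine, not part of the answer above), so the same script runs on CPU-only machines.

```python
import torch
from torch import autograd

def Variable(data, *args, **kwargs):
    # Move the tensor to the GPU only when one is available,
    # so the wrapper also works on CPU-only machines.
    if torch.cuda.is_available():
        data = data.cuda()
    return autograd.Variable(data, *args, **kwargs)

a = torch.randn(1, 1)
v = Variable(a)
print(v.is_cuda)  # True on a GPU machine, False otherwise
```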