Wrapping Variable for easy CUDA usage

I wrapped autograd.Variable so that I cannot forget to call .cuda() somewhere in my code. A global variable CUDA_ID (a plain Python variable) holds the device ID, and I want every WrappedVariable to be allocated on the GPU with that ID.

import torch
from torch.autograd import Variable

CUDA_ID = 0

class WrappedVariable(Variable):
    def __init__(self, data, *args, **kwargs):
        print('before cuda', type(data))
        data = data.cuda(CUDA_ID)
        print('after cuda', type(data))
        super(Variable, self).__init__(data, *args, **kwargs)


if __name__ == '__main__':
    t = torch.ones(2, 2)
    print('before wrapping', type(t))
    t_var = WrappedVariable(t)
    print('after wrapping', type(t_var.data))

However, this code prints the following:

before wrapping <class 'torch.FloatTensor'>
before cuda <class 'torch.FloatTensor'>
after cuda <class 'torch.cuda.FloatTensor'>
after wrapping <class 'torch.FloatTensor'>

Why was the tensor not allocated to the GPU?

It might be because the call to super().__init__ comes after the call to .cuda. There are also two smaller problems: super() should be given WrappedVariable rather than Variable, and Variable.cuda() is not an in-place operation, so the wrapped tensor has to be reassigned through .data. Here's what I would try:

class WrappedVariable(Variable):
    def __init__(self, data, *args, **kwargs):
        super(WrappedVariable, self).__init__(data, *args, **kwargs)
        print('before cuda', type(self.data))
        # Variable.cuda() returns a new Variable instead of moving this
        # one, so move the underlying tensor by reassigning .data
        self.data = self.data.cuda(CUDA_ID)
        print('after cuda', type(self.data))
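
If that still does not move the data, another option is to skip subclassing and use a small factory function that puts the tensor on the GPU before the Variable is built. This is just a sketch, assuming the same CUDA_ID global and a CUDA-capable machine; make_variable is only an illustrative name:

def make_variable(data, *args, **kwargs):
    # Move the tensor to the configured GPU first, then wrap it in a
    # plain Variable, so there is no constructor behavior to fight with.
    return Variable(data.cuda(CUDA_ID), *args, **kwargs)


t_var = make_variable(torch.ones(2, 2))
print(type(t_var.data))  # should now be torch.cuda.FloatTensor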