How to develop a module with a Parameter adaptive to torch.*Tensor and torch.cuda.*Tensor?

I run into a problem when I try to develop a module with a Parameter. I have to initialize the Parameter with either a torch.*Tensor or a torch.cuda.*Tensor, so how can I construct a module that is adaptive to both CPU and GPU tensors?

Such as:

For cpu:
x = torch.FloatTensor(2,3).cauchy_()
para = Parameter(x)

For gpu:
x = torch.cuda.FloatTensor(2,3).cauchy_()
para = Parameter(x)

It would be better if the module were adaptive, because we have to use the parameter for computation, and every variable involved has to be a CPU/GPU tensor accordingly.

So could someone help me?

Hi,

For modules, you should save your parameters as attributes on self:
self.weight = Parameter(torch.rand(weight_size))

This way, when you call .cuda() on your model, they will be moved to the GPU automatically along with all the other parameters.
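
Here is a minimal sketch of what that looks like; the name MyModule and the sizes are just placeholders for illustration, not something from your code:

import torch
import torch.nn as nn
from torch.nn import Parameter

class MyModule(nn.Module):
    def __init__(self, weight_size=(2, 3)):
        super().__init__()
        # Registering the tensor as a Parameter attribute makes it part
        # of the module's state, so .cuda() / .cpu() will move it.
        self.weight = Parameter(torch.rand(weight_size))

    def forward(self, x):
        return torch.mm(x, self.weight.t())

m = MyModule()
if torch.cuda.is_available():
    m.cuda()
print(m.weight.device)  # cuda:0 if moved, otherwise cpu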

Yes, I do call .cuda() at the beginning too, but I had overlooked one problem. For example:

# suppose the module is named Exam
x = Parameter(torch.FloatTensor(2,3).normal_())
y = torch.rand(2,3)
z = torch.mm(x, y.t())

That is, inside a module we sometimes define some constants and compute with the Parameter. Even after calling Exam.cuda(), a runtime error will occur:
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 ‘other’

So we have to write:
y = torch.rand(2,3).cuda()

So how can I solve this kind of problem?

If you want to keep what we call buffer tensors, you can do that as well. Register them with self.register_buffer('my_buffer', torch.rand(2, 3)) in the __init__; they will then be moved to the GPU the same way as the parameters when you call .cuda() on the module.
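
For instance, a small sketch along those lines (Exam and the shapes follow your example above; the exact layout is just an assumption):

import torch
import torch.nn as nn
from torch.nn import Parameter

class Exam(nn.Module):
    def __init__(self):
        super().__init__()
        self.x = Parameter(torch.FloatTensor(2, 3).normal_())
        # A buffer is not trainable, but it is part of the module's state,
        # so .cuda() moves it together with self.x.
        self.register_buffer('y', torch.rand(2, 3))

    def forward(self):
        # self.x and self.y are always on the same device.
        return torch.mm(self.x, self.y.t())

m = Exam()
if torch.cuda.is_available():
    m.cuda()
z = m()  # no device-mismatch RuntimeError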

If you want to do this during the forward pass, you can pass device=input.device as a keyword argument to most of the tensor-creation functions (like rand, zeros, ones, …).
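
Something like this, assuming a PyTorch version (0.4 or later) where the factory functions accept a device= keyword:

import torch
import torch.nn as nn
from torch.nn import Parameter

class Exam(nn.Module):
    def __init__(self):
        super().__init__()
        self.x = Parameter(torch.FloatTensor(2, 3).normal_())

    def forward(self, inp):
        # Create the constant on whatever device the input lives on,
        # so the same code works for both CPU and GPU inputs.
        y = torch.rand(2, 3, device=inp.device)
        return torch.mm(self.x, y.t())

m = Exam()
out = m(torch.rand(4, 3))             # CPU
if torch.cuda.is_available():
    m.cuda()
    out = m(torch.rand(4, 3).cuda())  # GPU, still no mismatch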

Excellent ideas, thanks a lot. I think it will work. :grinning: