Move tensor to the same GPU as another tensor

Assume I have a multi-GPU system. Let tensor “a” be on one of the GPUs, and tensor “b” be on the CPU. How can I move “b” to the same GPU that “a” resides on?

Unfortunately, b.type_as(a) always moves b to GPU 0.
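A minimal repro of what I’m seeing (a sketch, assuming a recent PyTorch build and a machine with at least two GPUs):

import torch

a = torch.randn(4, 4, device="cuda:1")  # lives on GPU 1
b = torch.randn(4, 4)                   # lives on CPU

c = b.type_as(a)
print(c.device)  # cuda:0, not cuda:1 -- type_as only matches the dtype,
                 # so the copy lands on the current CUDA device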

Thanks.


Tensor.new will do exactly that.

Thanks @ptrblck. The problem I have with the “Tensor.new” function is that

  1. If “a” is on the GPU and “b” is on the CPU, then “a.new(b)” does not work (error: …constructor received an invalid combination of arguments…). “a.new(b.numpy())” works, but I am afraid it is inefficient.

  2. If “a” and “b” are already on the same device, then “a.new(b)” will unnecessarily create a new copy of “b”.

I am looking for a function like “b.type_as(a)”, but one that automatically moves the data to the same device as “a”.
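Something like this hypothetical helper (a sketch using Tensor.to, which returns b itself when the dtype and device already match, so point 2 above is avoided):

import torch

def move_like(b, a):
    # Hypothetical helper: returns b with a's dtype and device.
    # Tensor.to is a no-op when nothing needs to change,
    # so no unnecessary copy is made.
    return b.to(a)

To keep b’s own dtype and only change the device, b.to(a.device) would do.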

As far as I understand, you would like to move b to the same device as a.
This should work:

import torch

a = torch.randn(10, 10).cuda()  # created on the current CUDA device
print(a)
b = a.new(a)                    # new tensor on the same device as a
print(b)
c = a.new(10, 10)               # uninitialized 10x10 tensor on a's device
print(c)

@ptrblck The problem is that “b” does not necessarily have the same shape and type as “a”. For example, “a” could be a 10×10 float tensor, while “b” could be a 13×19×23 int tensor.

You can pass the standard constructor arguments to new, just as you would when creating a new Tensor:

import torch

a = torch.randn(10, 10).cuda()
print(a)
b = a.new(13, 19, 23).long()  # allocated on a's device, then cast to int64
print(b)

Would this work?

EDIT: You could of course pass a numpy array or something else to the constructor.
Could you post your use case? I have the feeling I’m not really understanding your problem and am thus posting useless approaches. :wink:
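(On newer PyTorch versions, the new_* factory methods express this more directly; a sketch, assuming PyTorch 0.4 or later:)

import torch

a = torch.randn(10, 10).cuda()
# new_empty inherits a's device but lets you pick the dtype directly,
# avoiding the intermediate float allocation from .new(...).long()
b = a.new_empty((13, 19, 23), dtype=torch.long)
print(b.shape, b.dtype, b.device)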


Thanks @ptrblck. I have a huge project, and in most places I used “type_as” to move data to the proper device. Then I wanted to run several instances of that program at the same time on different GPUs of the machine. The problem was that type_as always uses GPU 0. Right now I am using the approach explained here: Select GPU device through env vars, and it solves my problem.
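For reference, a minimal sketch of that env-var approach (the variable must be set before CUDA is initialized; the value "1" is just an example):

import os

# Restrict this process to one physical GPU *before* torch initializes CUDA;
# that GPU is then visible as cuda:0, so type_as lands on the intended device.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch
print(torch.cuda.device_count())  # 1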

This may be newer functionality in the Tensor API, but to move tensor a to the device of tensor b I use:

a = a.to(b.device)
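
A quick sketch (the cuda:1 device assumes a second GPU is available):

import torch

b = torch.randn(3, 3, device="cuda:1")
a = torch.randn(3, 3)   # on CPU

a = a.to(b.device)      # no-op if a is already on b's device
print(a.device)         # cuda:1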

The following also works, provided b is on a GPU:

a = a.to(b.get_device())
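
Note that get_device() is only meaningful for CUDA tensors (in recent versions it returns -1 for CPU tensors, which .to() will not accept), so b.device is the more general option.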

I think you need to do b = b.to(a.device) in order to move b to the same GPU that a resides on. Otherwise you will do the reverse.