Torch.stack and device

I want to stack a list of tensors and move the result to the GPU:

torch.stack(fatoms, 0).to(device=device)

As far as I know, the tensor is first created on the CPU and then transferred to the specified device. How can I put it on the GPU directly?


If both tensors are already on the GPU, the result will also have the same device:

a = torch.randn(10, device='cuda:0')
b = torch.randn(10, device='cuda:0')

c = torch.stack((a, b))
print(c.device)
> device(type='cuda', index=0)

Yes, but what if fatoms (see my example) is a list?

If your fatoms list contains CPU tensors, then your example is the way to go.
Alternatively, you could push the contents onto the GPU before calling torch.stack, which is what my example shows.
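To make both options concrete, here is a small sketch. The fatoms list here is a hypothetical stand-in (three same-shaped CPU tensors), and the code falls back to CPU when no GPU is available so it runs anywhere:

```python
import torch

# Hypothetical stand-in for the fatoms list: three CPU tensors of the same shape.
fatoms = [torch.randn(4) for _ in range(3)]

# Fall back to CPU so this sketch also runs on machines without a GPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Option 1: stack on the CPU, then move the result in a single transfer.
stacked = torch.stack(fatoms, 0).to(device)

# Option 2: move each tensor first; torch.stack then allocates the
# result directly on that device.
stacked_direct = torch.stack([t.to(device) for t in fatoms], 0)

print(stacked.shape, stacked.device, stacked_direct.device)
```

Both produce a tensor of shape (3, 4) on the target device; option 1 is usually simpler, while option 2 avoids building the stacked tensor on the CPU first.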