I want to stack a list of tensors and move the result to the GPU:
torch.stack(fatoms, 0).to(device=device)
As far as I know, the tensor is first created on the CPU and then transferred to the specified device. How can I put it on the GPU directly?
If both tensors are already on the GPU, the result will also have the same device:
a = torch.randn(10, device='cuda:0')
b = torch.randn(10, device='cuda:0')
c = torch.stack((a, b))
print(c.device)
> device(type='cuda', index=0)
Yes, but what if fatoms (see my example) is a list?
If your fatoms list contains CPU tensors, then your example is the way to go.
You could push the contents onto the GPU before calling torch.stack, which is what my example shows.
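To make that concrete, here is a minimal sketch of moving each list element to the GPU before stacking; the fatoms contents here are made-up placeholder tensors, and the device fallback to CPU is just so the snippet runs anywhere:

```python
import torch

# Placeholder list standing in for `fatoms` (three feature tensors of length 5)
fatoms = [torch.randn(5) for _ in range(3)]

# Use the GPU if one is available, otherwise fall back to CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Move each tensor first, then stack; the stacked result inherits the device
stacked = torch.stack([t.to(device) for t in fatoms], 0)

print(stacked.device)   # cuda:0 if a GPU is available
print(stacked.shape)    # torch.Size([3, 5])
```

Note that each .to(device) still performs its own host-to-device copy, so this moves the data in one transfer per list element rather than one transfer for the whole stack.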
Is there a way to cost-effectively push a list of tensors to the GPU?