Should I use a list of objects on GPU?

I just started using PyTorch. My question: for objects o_1…o_n that are already on the GPU (i.e., I have already called .to(device) on each of them), should I keep them in a Python list o_1…o_n? Is that efficient? Thanks in advance.

For your object to live on the GPU, it has to be a tensor. If you have multiple tensors you would like to move to the GPU, you can stack them into a single multi-dimensional tensor. A minimal example is below:

import torch

t1 = torch.tensor(1)  # tensor(1)
t2 = torch.tensor(2)  # tensor(2)
t3 = torch.tensor(3)  # tensor(3)
t_list = torch.cat([t1.view(1), t2.view(1), t3.view(1)])
Out: tensor([1, 2, 3])

The .view(1) call handles the special case of 0-dimensional tensors by making them 1-D so that torch.cat can concatenate them; for tensors that already have at least one dimension it can be omitted. A sketch of the same idea with torch.stack follows below.
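A rough sketch of that alternative: torch.stack adds a new leading dimension, so it handles 0-dimensional tensors without the .view(1) reshaping, and a single .to(device) call then moves the whole batch at once. The device selection here is just an assumption for illustration.

import torch

# Assumed device selection for illustration; falls back to CPU if no GPU is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

t1 = torch.tensor(1)
t2 = torch.tensor(2)
t3 = torch.tensor(3)

# torch.stack adds a new leading dimension, so 0-dimensional tensors need no reshaping.
t_stacked = torch.stack([t1, t2, t3])  # tensor([1, 2, 3])

# One .to(device) call moves the whole batch in a single transfer,
# rather than moving each element of a Python list separately.
t_stacked = t_stacked.to(device)

The point of the sketch is that one batched transfer is generally preferable to looping over a Python list of individual tensors.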


Thank you. So what if the object is a neural network model? I call .to(device) on the network model, roughly like the sketch below.
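A minimal sketch of what I mean (the model here is just a placeholder, not my actual network):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model for illustration; any nn.Module behaves the same way.
model = nn.Linear(10, 2)

# .to(device) moves all of the module's registered parameters and buffers to the device.
model = model.to(device)

# Inputs have to live on the same device as the model.
x = torch.randn(4, 10, device=device)
out = model(x)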