How to convert a list of tensors to a PyTorch tensor?

Hi, I have a list of 4 tensors that I want to convert into a single PyTorch tensor. I used
y = torch.tensor(x), where x is the list.
But I am getting the following error:
ValueError: only one element tensors can be converted to Python scalars
How do I resolve this?

Depending on what exactly you want, you’ll most likely want to use either torch.stack (which joins the tensors along a new dimension) or torch.cat (which concatenates them along an existing dimension).

Here’s a quick example:

import torch

x = torch.tensor([1,2,3,4])
y = torch.tensor([5,6,7,8])
list_of_tensors = [x,y]

# will produce a tensor of shape (2,4)
stacked_0 = torch.stack(list_of_tensors, dim=0)

# will produce a tensor of shape (4,2)
stacked_1 = torch.stack(list_of_tensors, dim=1)

# will produce a tensor of shape (8,)
concatenated = torch.cat(list_of_tensors, dim=0)


I tried applying torch.stack() but I am getting the following error now:
RuntimeError: stack expects each tensor to be equal size, but got [1, 64, 128, 128] at entry 0 and [1, 128, 64, 64] at entry 1
Can you please help me resolve this?

Stacking tensors along a new dimension requires them to be equally sized in all of their existing dimensions. The best / “correct” way to make sure this prerequisite is satisfied depends heavily on your specific use case, i.e., how you obtain these tensors in the first place, what they represent, and so on.
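As a small illustration of the difference (the shapes here are made up):

import torch

a = torch.randn(2, 3)
b = torch.randn(4, 3)

# cat only requires the sizes to match in the dimensions you are NOT
# concatenating along, so this works:
c = torch.cat([a, b], dim=0)      # -> shape (6, 3)

# stack adds a new dimension, so every tensor must have exactly the same
# shape; this line would raise the RuntimeError you are seeing:
# s = torch.stack([a, b], dim=0)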

The tensors in the list are actually style features of an image extracted from 4 different layers of a VGG-19 network, hence the differences in size. I need to concatenate these style features with a tensor of content features, for which I first have to convert the list into a single tensor, but I am unable to do so.

If this is all supposed to happen as part of a model's forward pass, you should consider adding additional layers which take care of resizing the inputs such that they match. For upsampling the spatial dimensions you could use ConvTranspose2d, or the Upsample layer, which performs interpolation rather than a learned transformation. In addition, you would need to make sure that the number of channels (the size of the channel dimension, dim 1) also matches, e.g. by applying a suitable linear transformation or by setting the in_channels and out_channels values of ConvTranspose2d to whatever you need.
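Here's a rough sketch of what that could look like. The shapes are just placeholders taken from your error message, and the layer settings (target size, kernel size, channel counts) are assumptions you'd adapt to your model:

import torch
import torch.nn as nn

# Placeholder feature maps with the shapes from the error message.
feat_a = torch.randn(1, 64, 128, 128)   # e.g. features from an earlier layer
feat_b = torch.randn(1, 128, 64, 64)    # e.g. features from a deeper layer

# Option 1: non-learned interpolation to a common spatial size,
# then concatenation along the channel dimension.
upsample = nn.Upsample(size=(128, 128), mode='bilinear', align_corners=False)
feat_b_up = upsample(feat_b)                       # -> [1, 128, 128, 128]
combined = torch.cat([feat_a, feat_b_up], dim=1)   # -> [1, 192, 128, 128]

# Option 2: a learned upsampling that also maps the channel count,
# so the resized tensor can be stacked with the other one.
deconv = nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=2, stride=2)
feat_b_learned = deconv(feat_b)                          # -> [1, 64, 128, 128]
stacked = torch.stack([feat_a, feat_b_learned], dim=0)   # -> [2, 1, 64, 128, 128]

Which of the two is more appropriate mostly comes down to whether you want the resizing itself to be trainable.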