How to concat 2 tensors?

This question has been asked with slight variations, but I could not find an answer to my (simple) problem.

all_labels = torch.Tensor([]) # torch.Size([0])
for batch_idx, (images, labels) in enumerate(train_loader, 1):
    all_labels = torch.cat((all_labels, labels)) #labels is torch.Size([32])

RuntimeError: Expected object of scalar type Float but got scalar type Long for sequence element 1 in sequence argument at position #1 ‘tensors’

What is happening here? I would like to understand the meaning of this error message and the reason of the problem. I basically just want to append the labels for each batch to the all_labels.

If you are calling torch.Tensor (uppercase T in Tensor), you are creating an empty FloatTensor.
You could avoid this error by defining the empty tensor as a LongTensor:

all_labels = torch.tensor([]).long()
for _ in range(5):
    all_labels = torch.cat((all_labels, torch.tensor([1])))
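Equivalently, you can pass the dtype directly when creating the empty accumulator instead of calling .long() afterwards; a minimal sketch:

```python
import torch

# Create the empty accumulator with the right dtype up front;
# torch.long matches the default integer dtype of label tensors.
all_labels = torch.empty(0, dtype=torch.long)
for _ in range(5):
    all_labels = torch.cat((all_labels, torch.tensor([1])))

print(all_labels.dtype)   # torch.int64
print(all_labels.shape)   # torch.Size([5])
```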

However, I would recommend storing the labels in a plain Python list and converting it to a tensor afterwards, which should be faster than concatenating inside the loop.
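The list-based pattern could look like this minimal sketch, where a list of dummy label batches stands in for your train_loader:

```python
import torch

# Collect each batch's labels in a Python list, then concatenate once at
# the end instead of calling torch.cat in every iteration.
label_batches = []
for labels in [torch.ones(32, dtype=torch.long) for _ in range(3)]:  # stand-in for the loader
    label_batches.append(labels)

all_labels = torch.cat(label_batches)
print(all_labels.shape)  # torch.Size([96])
```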


Thanks a lot! I am also leaving this link which I found very interesting https://jdhao.github.io/2017/11/15/pytorch-datatype-note/

I’m having a similar issue, but can’t seem to get my code to work with your suggested approach (trying to implement the linear probe from GitHub - openai/CLIP: Contrastive Language-Image Pretraining).

text = []
for example in dataset:
    text.append(example["label"])
text = torch.cat(text).cpu().numpy()

results in "TypeError: expected Tensor as element 0 in argument 0, but got numpy.int64"

text = []
for example in dataset:
    text.append(example["label"])
text = torch.tensor(text)
text = torch.cat(text).cpu().numpy()

Results in "TypeError: cat(): argument 'tensors' (position 1) must be tuple of Tensors, not Tensor"

Thoughts?

Based on your second code snippet, text is already a tensor after the torch.tensor(text) call, so I'm unsure why you would want to use torch.cat on it again.
Anyway, the inputs to torch.cat should be a tuple (or list) of tensors, so you would have to use torch.cat((text,)).
However, this won’t change the shape of text, since it’s already a single tensor.
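Putting that together, since the list elements are plain numpy.int64 scalars rather than tensors, torch.tensor alone is enough and no torch.cat is needed. A minimal sketch, where `dataset` is a hypothetical stand-in yielding examples with an integer "label" field:

```python
import torch
import numpy as np

# Stand-in for the real dataset: each example holds a numpy.int64 label.
dataset = [{"label": np.int64(i % 2)} for i in range(4)]

# The labels are scalars, not tensors, so torch.cat cannot be used on them;
# torch.tensor builds the 1-D tensor from the list directly.
text = [example["label"] for example in dataset]
text = torch.tensor(text).cpu().numpy()
print(text)  # [0 1 0 1]
```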