So if first_tensor and second_tensor were each of size [5, 32, 32], with the first dimension being the batch size, then third_tensor would be of size [10, 32, 32], containing the two tensors above stacked on top of each other along the batch dimension.

How can I do this with torch Variables? Or at least with torch tensors?
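
For concreteness, the shapes in question (a minimal sketch; the tensor names are the ones from the post above):

```python
import torch

first_tensor = torch.randn(5, 32, 32)   # batch of 5 items, each 32x32
second_tensor = torch.randn(5, 32, 32)  # another batch of 5

# desired: third_tensor of shape [10, 32, 32],
# the two batches stacked along the batch dimension
```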

Thanks, I wrote my own function that converts the tensors to numpy.array, uses numpy.concatenate, and then converts back… not sure how slow this is, though.
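
Presumably something like this round-trip (a sketch of the workaround described above, assuming CPU tensors; concat_via_numpy is a hypothetical name):

```python
import numpy as np
import torch

def concat_via_numpy(a, b):
    # convert to numpy, concatenate along the batch axis, convert back
    joined = np.concatenate([a.numpy(), b.numpy()], axis=0)
    return torch.from_numpy(joined)

first_tensor = torch.randn(5, 32, 32)
second_tensor = torch.randn(5, 32, 32)
print(concat_via_numpy(first_tensor, second_tensor).shape)  # torch.Size([10, 32, 32])
```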

Cool! But you won't be able to backpropagate through a conversion to numpy and back (if you need that). torch.cat, on the other hand, lets torch autograd do its "magic". I'm not sure about the performance difference between the two, but I'd guess the native way of concatenating is faster.
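
A minimal sketch of the native approach, showing that gradients flow through torch.cat:

```python
import torch

first_tensor = torch.randn(5, 32, 32, requires_grad=True)
second_tensor = torch.randn(5, 32, 32, requires_grad=True)

# concatenate along dim 0 (the batch dimension)
third_tensor = torch.cat([first_tensor, second_tensor], dim=0)
print(third_tensor.shape)  # torch.Size([10, 32, 32])

# gradients flow back through the concatenation to both inputs
third_tensor.sum().backward()
print(first_tensor.grad.shape)   # torch.Size([5, 32, 32])
print(second_tensor.grad.shape)  # torch.Size([5, 32, 32])
```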

@Bixqu also be mindful of this pattern: x = torch.cat([y, z]).view(-1)
.view is a method that can change the shape of a tensor without changing its contents.
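
For example (a small sketch with 1-D tensors, where .view(-1) flattens the concatenated result):

```python
import torch

y = torch.arange(4)     # tensor([0, 1, 2, 3])
z = torch.arange(4, 8)  # tensor([4, 5, 6, 7])

x = torch.cat([y, z]).view(-1)
print(x)        # tensor([0, 1, 2, 3, 4, 5, 6, 7])
print(x.shape)  # torch.Size([8])

# .view can also reshape to any compatible size without copying the data
print(x.view(2, 4))  # a 2x4 view of the same underlying elements
```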