Concatenate two tensors in a "Scissors" manner

Hi, I have two tensors, let's say images, each with shape (batch, 3, 224, 224); call them T1 and T2.
I want to concatenate the tensors along the channel dimension, producing an output of shape (batch, 6, 224, 224).
However, I want the output tensor to look like this (along the channel dimension): (1st channel of T1, 1st channel of T2, 2nd channel of T1, 2nd channel of T2, 3rd channel of T1, 3rd channel of T2)

instead of the regular concatenation, which would give:
(1st channel of T1, 2nd channel of T1, 3rd channel of T1, 1st channel of T2, 2nd channel of T2, 3rd channel of T2)

What is the fastest way to achieve this output? Any ideas?

Thanks!


This code should work:

import torch

# Each tensor has shape (2, 3, 4) = (batch, channels, width); every channel
# is filled with a single distinct value so the result can be inspected.
a = torch.arange(0, 3).unsqueeze(0).unsqueeze(2).expand(2, -1, 4)
b = torch.arange(3, 6).unsqueeze(0).unsqueeze(2).expand(2, -1, 4)

# Stack along a new dimension right after the channels: (2, 3, 2, 4).
# Flattening that pair of dims interleaves the channels of a and b.
c = torch.stack((a, b), dim=2)
print(c.view(2, 6, 4))

Note that I initialized each channel with a certain value, so that you can check the interleaved result.
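Applied to the shapes from the question, the same stack-then-reshape idea looks like this (a sketch; the tensor names T1/T2 and the random data are just for illustration):

```python
import torch

batch = 2
T1 = torch.randn(batch, 3, 224, 224)
T2 = torch.randn(batch, 3, 224, 224)

# Stack along a new dim right after the channel dim, then flatten:
# (batch, 3, 2, 224, 224) -> (batch, 6, 224, 224)
out = torch.stack((T1, T2), dim=2).reshape(batch, 6, 224, 224)

# Channels now alternate: T1[:,0], T2[:,0], T1[:,1], T2[:,1], ...
print(torch.equal(out[:, 0], T1[:, 0]))  # True
print(torch.equal(out[:, 1], T2[:, 0]))  # True
```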

Very Interesting!
Thanks!

Hi @ptrblck, but what if I have two tensors, say images with shapes (batch, 55, 224, 224) and (batch, 34, 224, 224); call them T1 and T2.
I want to concatenate the tensors along the channel dimension, producing an output of (batch, 89, 224, 224).
And it should be a regular concatenation.

I tried torch.cat((t1, t2), 3) and got a runtime error: invalid argument 0: Sizes of tensors must match except in dimension 3. Got 55 and 34 in dimension 1 at /pytorch/aten/src/TH/generic/THTensor.cpp:612

Please help me with the above problem!
Thanks!

torch.cat uses the dim argument to define in which dimension the tensors should be concatenated.
In your case, you should use dim=1, since you want to concatenate in the channel dimension.
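For these shapes that would be (a minimal sketch with random data; the variable names follow the question):

```python
import torch

t1 = torch.randn(2, 55, 224, 224)
t2 = torch.randn(2, 34, 224, 224)

# Concatenate along the channel dimension (dim=1); all other dims must match.
out = torch.cat((t1, t2), dim=1)
print(out.shape)  # torch.Size([2, 89, 224, 224])
```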

thank you very much @ptrblck

What if we want to do the same as in @barakb's case, i.e. (1st channel of T1, 1st channel of T2, 2nd channel of T1, 2nd channel of T2, 3rd channel of T1, 3rd channel of T2, ...), instead of a regular concatenation, but with tensors of different channel counts, e.g. T1 with 55 channels and T2 with only 34?

How to solve this problem?
Thanks!

Would you like to use the first 34 channels only in this use case or how would you like to deal with the missing channels in T2?

I would like to interleave the first 34 channels of T1 with the 34 channels of T2, one channel of T1 and one channel of T2 at a time.
After that, the remaining 21 channels of T1 (which has 55 channels) should be concatenated to the result:
T3 = concat(interleave(T1's first 34 channels, T2's 34 channels), T1's remaining 21 channels)
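Combining the two answers above, one way to sketch this is to interleave the shared 34 channels with stack-then-reshape and then torch.cat the leftover channels of T1 (variable names here are illustrative, following the thread):

```python
import torch

batch = 2
T1 = torch.randn(batch, 55, 224, 224)
T2 = torch.randn(batch, 34, 224, 224)

n = T2.size(1)  # 34 shared channels

# Interleave T1's first 34 channels with T2's 34 channels:
# (batch, 34, 2, 224, 224) -> (batch, 68, 224, 224)
interleaved = torch.stack((T1[:, :n], T2), dim=2).reshape(batch, 2 * n, 224, 224)

# Append T1's remaining 21 channels at the end
T3 = torch.cat((interleaved, T1[:, n:]), dim=1)
print(T3.shape)  # torch.Size([2, 89, 224, 224])
```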