Hi, I have 2 image tensors, each of size (batch, 3, 224, 224); let's name them T1 and T2.
I want to concatenate them in the channel dimension, giving an output of (batch, 6, 224, 224).
I want the output tensor to look like this in the channel dimension: (1st channel of T1, 1st channel of T2, 2nd channel of T1, 2nd channel of T2, 3rd channel of T1, 3rd channel of T2)
Instead of the regular concatenate that will give:
(1st channel of T1, 2nd channel of T1, 3rd channel of T1, 1st channel of T2, 2nd channel of T2, 3rd channel of T2)
What is the fastest way to achieve this output? Any ideas?
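One common way to get this interleaved order (a minimal sketch, assuming tensors of the shape described above) is to stack the two tensors along a new dimension right after the channel dimension and then flatten the two dimensions together:

```python
import torch

# hypothetical random tensors of the stated shape (batch=4 here)
t1 = torch.randn(4, 3, 224, 224)
t2 = torch.randn(4, 3, 224, 224)

# stack along a new dim after the channel dim: (batch, 3, 2, 224, 224),
# then flatten dims 1 and 2 into one channel dim: (batch, 6, 224, 224)
# channel order becomes T1[0], T2[0], T1[1], T2[1], T1[2], T2[2]
out = torch.stack((t1, t2), dim=2).flatten(1, 2)
print(out.shape)  # torch.Size([4, 6, 224, 224])
```

Since `stack` and `flatten` are just view/reshape operations plus one copy, this should be faster than concatenating channel by channel in a loop.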
Hi @ptrblck, but what if I have 2 image tensors of size (batch, 55, 224, 224) and (batch, 34, 224, 224); let's name them T1 and T2.
I want to concatenate the tensors in the channel dimension, giving an output of (batch, 89, 224, 224).
And it should be a regular concatenation.
I tried torch.cat((t1, t2), 3) and got a runtime error: invalid argument 0: Sizes of tensors must match except in dimension 3. Got 55 and 34 in dimension 1 at /pytorch/aten/src/TH/generic/THTensor.cpp:612
torch.cat uses the dim argument to define in which dimension the tensors should be concatenated.
In your case, you should use dim=1, since you want to concatenate in the channel dimension.
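For example (a minimal sketch with hypothetical tensors of the shapes from the question):

```python
import torch

# batch=2, with 55 and 34 channels respectively
t1 = torch.randn(2, 55, 224, 224)
t2 = torch.randn(2, 34, 224, 224)

# dim=1 is the channel dimension for (batch, channels, height, width)
out = torch.cat((t1, t2), dim=1)
print(out.shape)  # torch.Size([2, 89, 224, 224])
```

Passing 3 as the dim argument tried to concatenate along the width dimension, which is why the error complained that the sizes 55 and 34 in dimension 1 did not match.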
What if we want to do the same as @barakb, i.e. (1st channel of T1, 1st channel of T2, 2nd channel of T1, 2nd channel of T2, 3rd channel of T1, 3rd channel of T2, ...) instead of a regular concatenation, but with a different number of channels, e.g. T1 containing 55 channels and T2 containing only 34 channels?
I would like to interleave the first 34 channels of T1 with the 34 channels of T2, one channel of T1 and one channel of T2 at a time.
After that, the remaining 21 channels of T1 (which has 55 channels) should be concatenated onto the result:
T3 = concat((interleave(first 34 channels of T1, 34 channels of T2), remaining 21 channels of T1)
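The steps above can be sketched as follows (assuming hypothetical tensors with the shapes given: T1 with 55 channels, T2 with 34): interleave the shared channels with stack + flatten, then append T1's leftover channels with a regular cat.

```python
import torch

# hypothetical inputs: T1 has 55 channels, T2 has 34
t1 = torch.randn(2, 55, 224, 224)
t2 = torch.randn(2, 34, 224, 224)

n = t2.size(1)  # 34 shared channels

# interleave the first n channels of t1 with the n channels of t2:
# (batch, 34, 2, 224, 224) -> (batch, 68, 224, 224)
interleaved = torch.stack((t1[:, :n], t2), dim=2).flatten(1, 2)

# append the remaining 55 - 34 = 21 channels of t1
t3 = torch.cat((interleaved, t1[:, n:]), dim=1)
print(t3.shape)  # torch.Size([2, 89, 224, 224])
```

The resulting channel order is T1[0], T2[0], T1[1], T2[1], ..., T1[33], T2[33], followed by T1[34] through T1[54].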