Both S0 and S1 are tensors of size [12, 16, 64].
With the following for loop I want to get a result of size [12, 16*16, 64], but the result is [12, 16, 17, 64]. What modification should I apply?
S2 = []
for i in range(0, 16):
    S2.append(torch.cat((S0[:, i, None], S1), dim=1))
S3 = torch.stack(S2, 1)
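For reference, a minimal sketch of the loop above with random tensors. The per-step shape [12, 17, 64] comes from concatenating one S0 cell, [12, 1, 64], with all of S1, [12, 16, 64], which is why the stacked result is [12, 16, 17, 64]:

```python
import torch

S0 = torch.randn(12, 16, 64)
S1 = torch.randn(12, 16, 64)

S2 = []
for i in range(16):
    # S0[:, i, None] has shape [12, 1, 64]; cat with S1 along dim 1 gives [12, 17, 64]
    S2.append(torch.cat((S0[:, i, None], S1), dim=1))

# Stacking 16 tensors of shape [12, 17, 64] along a new dim 1 yields [12, 16, 17, 64]
S3 = torch.stack(S2, 1)
print(S3.shape)  # torch.Size([12, 16, 17, 64])
```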
Thanks @Intel_Novel. But the point is: each cell of S0 should be concatenated with all the cells of S1. As S1 has 16 cells in total, the process should take 16 steps.
I don’t get it. S1 is a tensor. S2 is a tensor. Maybe you should copy paste the original task.
If the problem is the number of steps (15 in my solution versus the 16 you need), you should consider starting from an empty tensor S2.
I am a novice in PyTorch. Sorry for the low-quality answer.
Actually, they are feature maps (4x4 grids: 16 cells each).
i=0: the first cell of S0 needs to be concatenated with all 16 cells of S1, then appended to S2.
i=1: the second cell of S0 needs to be concatenated with all 16 cells of S1, then appended to S2.
…
i=15: the 16th cell of S0 needs to be concatenated with all 16 cells of S1, then appended to S2.
That’s why I used "torch.cat((S0[:, i, None], S1), dim=1)".
?
The right way is to unsqueeze and expand S1, then use a single cat.
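A sketch of that idea, assuming each S0 cell should still be paired with all 16 S1 cells as in the loop. Note that this gives 17 entries per cell, so the flattened second dimension is 16*17, not 16*16; the variable names here are illustrative, not from the original post:

```python
import torch

S0 = torch.randn(12, 16, 64)
S1 = torch.randn(12, 16, 64)

# One cat instead of a loop: add a per-cell axis to S0 and broadcast S1 against it.
a = S0.unsqueeze(2)                         # [12, 16, 1, 64]
b = S1.unsqueeze(1).expand(-1, 16, -1, -1)  # [12, 16, 16, 64] (a view, no copy)
S3 = torch.cat((a, b), dim=2)               # [12, 16, 17, 64], same as the loop result
flat = S3.reshape(12, -1, 64)               # [12, 16*17, 64] if a flat layout is wanted
```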
Regarding the suggestion above: it’s not applicable in your situation anyway, but you should never(!) incrementally cat results. It’s quadratic in complexity for something that should be linear, because the growing result is copied on every step. If you do something like that, append to a Python list and feed it to cat at the end.
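A minimal illustration of the two patterns (the shapes are arbitrary, just for demonstration):

```python
import torch

chunks = [torch.randn(12, 1, 64) for _ in range(16)]

# Anti-pattern: repeated cat copies the growing result on every iteration (quadratic).
out = torch.empty(12, 0, 64)
for c in chunks:
    out = torch.cat((out, c), dim=1)

# Preferred: collect in a Python list, then cat once at the end (linear).
out2 = torch.cat(chunks, dim=1)  # [12, 16, 64], identical values to `out`
```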