x -> torch.Size([1, 512, 80, 80])

y -> torch.Size([1, 1024, 40, 40])

z -> torch.Size([1, 2048, 20, 20])

t = concatenate(x,y,z) # for example


Solved here!

Thank you, I tried this but it did not solve the problem:

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 2048 and 512 in dimension 2 at C:/w/1/s/tmp_conda_3.6_095855/conda/conda-bld/pytorch_1579082406639/work/aten/src\THC/generic/THCTensorMath.cu:71
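That error is what `torch.cat` raises when the non-concatenation dims disagree. A minimal sketch reproducing it, assuming random tensors with the shapes from this thread:

```python
import torch

# Hypothetical tensors with the shapes discussed above.
x = torch.randn(1, 512, 80, 80)
y = torch.randn(1, 1024, 40, 40)
z = torch.randn(1, 2048, 20, 20)

# Concatenating along dim=1 (channels) requires every OTHER dim to match,
# but the spatial sizes (80, 40, 20) differ, so this raises a RuntimeError.
try:
    t = torch.cat((x, y, z), dim=1)
except RuntimeError as e:
    print("cat failed:", e)
```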

```
x = torch.Size([1, 512, 80, 80])
y = torch.Size([1, 1024, 40, 40])
z = torch.Size([1, 2048, 20, 20])
print(torch.cat((torch.tensor(x), torch.tensor(y), torch.tensor(z))))
print(torch.stack((torch.tensor(x), torch.tensor(y), torch.tensor(z))))
```

Outputs:

```
tensor([   1,  512,   80,   80,    1, 1024,   40,   40,    1, 2048,   20,   20])
tensor([[   1,  512,   80,   80],
        [   1, 1024,   40,   40],
        [   1, 2048,   20,   20]])
```


Thanks. When I say `torch.Size`, I mean the shape of the tensor, not a tensor holding those numbers.


Hi,

It depends on which axis you want to use for concatenation. As they stand, you cannot concatenate these tensors because they do not have the same size.

All dims except the one you want to use as the concatenation dimension must have the same shape. You can achieve this with padding by repetition, zeros, etc. For repetition you can use `Tensor.expand(*sizes)`, but for other methods such as interpolation you need `torch.nn.functional.interpolate`.

Personally, I would first make dim=2 and dim=3 (the last two dims) the same size using `F.interpolate`, then expand the smaller tensors `x` and `y` by repetition using `Tensor.expand`.
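For example, a minimal sketch of the first step (names and random data are assumed): resize the spatial dims of `y` and `z` up to `x`'s 80x80 with `F.interpolate`, after which only the channel dim differs and concatenation along dim=1 works directly:

```python
import torch
import torch.nn.functional as F

# Hypothetical tensors with the shapes from this thread.
x = torch.randn(1, 512, 80, 80)
y = torch.randn(1, 1024, 40, 40)
z = torch.randn(1, 2048, 20, 20)

# Resize the last two dims of y and z to match x's spatial size.
y_up = F.interpolate(y, size=(80, 80), mode="nearest")
z_up = F.interpolate(z, size=(80, 80), mode="nearest")

# Now every dim except dim=1 (channels) matches, so cat succeeds.
t = torch.cat((x, y_up, z_up), dim=1)
print(t.shape)  # torch.Size([1, 3584, 80, 80])
```

If instead you wanted to concatenate along a spatial dim, you would make the channel dims match (e.g. by repetition) rather than the spatial ones.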

Expand: Concat two tensors with different dimensions

Interpolation: Resize tensor without converting to PIL image?

Edit1: replaced the wrongly used `pad` with `interpolate`.

Best
