# How to add tensors of variable length to feed to Linear layer?

I am trying to add tensors of different lengths and feed the result to a Linear layer:

```python
import torch

input1 = torch.randn(4, 40, 56, 56).view(-1, 4)
input2 = torch.randn(4, 40, 55, 55).view(-1, 4)
input3 = torch.randn(4, 40, 54, 54).view(-1, 4)

output = input1 + input2 + input3

fc = torch.nn.Linear(62720, 3)
output = fc(output)
```

But I get the following error:

```
RuntimeError: The size of tensor a (125440) must match the size of tensor b (121000) at non-singleton dimension 0
```

The simple way is `fc(torch.cat((input1, input2, input3), -1))`. A more memory-efficient way is to emulate the `nn.Linear` layer yourself: `input1.mm(weight1.t()) + input2.mm(weight2.t()) + … + bias`, where the weights can be slices, along the second dimension, of a single `nn.Parameter` of shape `(output_size, total_input_size)`.
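A minimal sketch of the memory-efficient variant described above, using made-up shapes for illustration (a shared batch dimension of 8 and per-input feature sizes of 100, 120, and 140): slicing one parameter tensor and summing the partial matrix products gives the same result as concatenating first.

```python
import torch

torch.manual_seed(0)

# Three feature tensors that share the batch dimension (8) but
# differ in their last (feature) dimension.
input1 = torch.randn(8, 100)
input2 = torch.randn(8, 120)
input3 = torch.randn(8, 140)

total_in = 100 + 120 + 140  # 360
out_features = 3

# One big weight of shape (output_size, total_input_size), as in nn.Linear.
weight = torch.nn.Parameter(torch.randn(out_features, total_in) * 0.01)
bias = torch.nn.Parameter(torch.zeros(out_features))

# Slices along the second (input) dimension, one per input tensor.
w1 = weight[:, :100]
w2 = weight[:, 100:220]
w3 = weight[:, 220:]

# Emulated linear layer: sum of partial products plus bias.
out_emulated = input1.mm(w1.t()) + input2.mm(w2.t()) + input3.mm(w3.t()) + bias

# Equivalent to concatenating on the last dim and applying the full weight.
out_cat = torch.cat((input1, input2, input3), -1).mm(weight.t()) + bias

print(torch.allclose(out_emulated, out_cat, atol=1e-5))  # True
```

The two paths are mathematically identical; the sliced version just avoids materializing the concatenated tensor.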

With `fc(torch.cat((input1, input2, input3), -1))` I still get an error:

```
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 125440 and 121000 in dimension 0 at /pytorch/aten/src/TH/generic/THTensor.cpp:612
```

Oh, a linear layer works on the last dimension, so you can concatenate on the last dimension and proceed. But in your code the tensor sizes vary *before* the last dimension, so you can't do this directly. You can use `.view(4, -1)` or `.view(4*40, -1)` on the inputs, but a varying first dimension (which is treated as the batch, i.e. independent, dimension) makes no sense. Maybe you need `permute()`. Also, I'm not sure where 62720 comes from; replace it with the concatenated length.
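Putting the advice above together with the original shapes, one plausible fix (assuming the first dimension of 4 is the batch) is to flatten each input with `.view(4, -1)` so the sizes differ only in the last dimension, then concatenate there and size the Linear layer from the concatenated length:

```python
import torch

# Flatten everything after the batch dimension (4), so only the
# last dimension differs between the three inputs.
input1 = torch.randn(4, 40, 56, 56).view(4, -1)  # (4, 125440)
input2 = torch.randn(4, 40, 55, 55).view(4, -1)  # (4, 121000)
input3 = torch.randn(4, 40, 54, 54).view(4, -1)  # (4, 116640)

# Concatenation on the last dimension now works.
combined = torch.cat((input1, input2, input3), -1)  # (4, 363080)

# Size the Linear layer from the concatenated length, not 62720.
fc = torch.nn.Linear(combined.size(-1), 3)
output = fc(combined)
print(output.shape)  # torch.Size([4, 3])
```

This keeps the batch dimension fixed at 4, which is what `nn.Linear` expects.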