Hi,

I have a dataset composed of sequences of different lengths.

For training an LSTM network I configured the dataloader to reshape each batch to [batch, sequence, channels], where the number of channels is 1, so [batch, sequence, 1].

Then I used pad_sequence and pack_padded_sequence to prepare the batch input for the network. Up to this point everything works fine.
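For reference, a minimal sketch of that padding/packing pipeline for the LSTM case (the batch size, sequence lengths, and hidden size here are illustrative):

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

# Two sequences of different lengths, each shaped [seq, 1] (channels last)
seqs = [torch.randn(5, 1), torch.randn(3, 1)]
lengths = torch.tensor([s.size(0) for s in seqs])

# Pad to a common length: [batch, max_seq, 1]
padded = pad_sequence(seqs, batch_first=True)

# Pack for the LSTM, run it, then unpack the output
packed = pack_padded_sequence(padded, lengths, batch_first=True, enforce_sorted=False)
lstm = torch.nn.LSTM(input_size=1, hidden_size=4, batch_first=True)
out, _ = lstm(packed)
unpacked, out_lengths = pad_packed_sequence(out, batch_first=True)

print(padded.shape)    # [2, 5, 1]
print(unpacked.shape)  # [2, 5, 4]
```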

Now I have tried to feed the same dataset to a 1D CNN with 1 input channel. I changed the batch shape to [batch, 1, sequence] to fit the input requirements of the 1D CNN; however, when I try to pad/pack the sequences it gives me this error:

“RuntimeError: The size of tensor a (86) must match the size of tensor b (95) at non-singleton dimension 1”
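A minimal sketch of how an error like this can arise (assuming pad_sequence is being called on the channel-first tensors): pad_sequence pads along the first dimension and requires all trailing dimensions to match, so [1, 86] and [1, 95] tensors cannot be padded together.

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Channel-first tensors, [1, seq], with different sequence lengths
a = torch.randn(1, 86)
b = torch.randn(1, 95)

# pad_sequence pads along dim 0; the trailing dims (86 vs 95)
# differ, so this raises a RuntimeError like the one quoted above
try:
    pad_sequence([a, b], batch_first=True)
except RuntimeError as e:
    print(e)
```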

How can I solve this?

Thanks

It’s tough to tell without seeing your code. Could you post a short, reproducible code sample?

Thanks for the reply, I managed to find a viable solution. I’m posting it here in case it is helpful for someone:

```
# self.output_size is the fixed padded shape, e.g. (1, max_len)
final_tensor = torch.zeros(self.output_size)
# copy the original [1, seq] tensor into the front of the padded one
final_tensor[0, :len(original_tensor[0])] = original_tensor
```
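As a self-contained sketch of the same idea (the function name and max_len value here are illustrative, not from the original code): instead of packing, zero-pad every sequence to a fixed length, so the batch stacks cleanly into the [batch, 1, sequence] shape the Conv1d expects.

```python
import torch

def pad_to_fixed(original_tensor, max_len):
    """Zero-pad a [1, seq] tensor to [1, max_len] for Conv1d input."""
    final_tensor = torch.zeros(1, max_len)
    final_tensor[0, :original_tensor.size(1)] = original_tensor[0]
    return final_tensor

# Sequences of lengths 86, 95, 40, all padded to 95
batch = [pad_to_fixed(torch.randn(1, n), 95) for n in (86, 95, 40)]
x = torch.stack(batch)  # [batch, 1, max_len] = [3, 1, 95]

conv = torch.nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3)
y = conv(x)             # [3, 8, 93] (95 - kernel_size + 1)
```

Note that, unlike packing for an RNN, the CNN will also convolve over the zero-padded tail, so you may want to keep the original lengths around to mask the outputs.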