TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not PackedSequence

I’m trying to apply a Conv2d layer to a batch of sequence data in which the sequences don’t all have the same length. I tried passing a packed sequence, but that doesn’t work…

import torch
import torch.nn as nn
conv1 = nn.Conv2d(1, 16, 5)

x = torch.randn([16, 1580, 201])  # x has shape (batch, sequence, feats)
lengths = torch.tensor(
    [1580, 959, 896, 881, 881, 881, 881, 881, 881, 881, 881, 881, 881, 335, 254, 219]
) 
# So only the first element in the batch has length 1580; the rest have been padded to that length
 
x_pack = torch.nn.utils.rnn.pack_padded_sequence(x, lengths, batch_first=True)
        
x_pack = conv1(x_pack)  # raises the TypeError above: conv2d() expects a Tensor, not a PackedSequence
print(x_pack)

# ... then feed x_pack to an LSTM...
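For context, the only shape-wise working alternative I can think of is to keep x as a padded dense tensor and add a channel dimension before the convolution (the names below, like x_dense, are just placeholders for illustration), roughly:

x_dense = x.unsqueeze(1)               # (batch, 1, sequence, feats) = (16, 1, 1580, 201)
out = conv1(x_dense)                   # (16, 16, 1576, 197) with the 5x5 kernel and no padding
out = out.permute(0, 2, 1, 3)          # (batch, sequence', channels, feats')
out = out.reshape(16, 1576, 16 * 197)  # flatten per-step features before packing for the LSTM
# lengths would also shrink by 4 (kernel_size - 1) before packing

…but that still runs the convolution over the padded frames, which is exactly what I’d like to avoid.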

Does anyone know how to avoid applying the convolution to the padded values of the inputs? Or will I have to iterate through each element of the batch?
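By “iterate through each element” I mean something like the following sketch (again, variable names are just for illustration):

outputs = []
for i, length in enumerate(lengths.tolist()):
    xi = x[i, :length].unsqueeze(0).unsqueeze(0)  # (1, 1, length, 201), no padded frames
    outputs.append(conv1(xi))                     # (1, 16, length - 4, 197)
# the results would then need to be re-padded/packed before the LSTM

which works but seems wasteful for larger batches.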

Thanks.
