LSTM with variable feature size

How can one feed variable size input to an LSTM layer? I need the input to be [batch, timestep, FEATURES], where FEATURES varies from example to example. Since I will constantly be getting new examples (the model will have to do online training), padding is not an option.

That’s just not how LSTMs work: they use a linear layer (or two) under the hood, and that won’t cope with a varying feature size. You could, of course, reimplement the LSTM manually (there also is one in the https://github.com/pytorch/benchmark repository) and replace the linear layer with convolutions or similar. I think this has been proposed somewhere in the literature. But all of this takes you outside the territory covered by PyTorch’s built-in LSTM implementation.
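For concreteness, a minimal sketch (all sizes here are invented) of why the feature dimension is fixed: nn.LSTM allocates its input-to-hidden weights at construction time, so a batch with a different feature size is rejected:

import torch
import torch.nn as nn

# input_size is baked into the weight shapes at construction time
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

x_ok = torch.randn(4, 10, 8)   # [batch, timestep, 8] -> fine
out, _ = lstm(x_ok)

x_bad = torch.randn(4, 10, 5)  # [batch, timestep, 5]
# lstm(x_bad)  # raises a RuntimeError: input.size(-1) must equal input_size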

Best regards

Thomas

Actually, you can:

batch_size = x.size(0)            # number of examples in the batch
x = x.reshape(batch_size, -1, 1)  # unroll all features into the time axis
layer, _ = self.LSTM(x.float())   # LSTM built with input_size=1

It’s a workaround, but it seems to do the job.
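To spell the trick out, a hedged sketch (the wrapper module and all sizes are made up for illustration): the LSTM is constructed with input_size=1, and each example’s timestep × FEATURES grid is unrolled into one long sequence of scalars, so a varying FEATURES only varies the sequence length:

import torch
import torch.nn as nn

class FlattenLSTM(nn.Module):
    # hypothetical wrapper around the reshape workaround above
    def __init__(self, hidden_size=32):
        super().__init__()
        self.LSTM = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)

    def forward(self, x):
        batch_size = x.size(0)
        # fold timestep and feature dims into one scalar sequence
        x = x.reshape(batch_size, -1, 1)
        out, _ = self.LSTM(x.float())
        return out

model = FlattenLSTM()
y1 = model(torch.randn(2, 10, 7))  # seen as 70 steps of size 1
y2 = model(torch.randn(2, 10, 3))  # seen as 30 steps of size 1

Note that this treats every scalar as its own timestep, which is exactly the caveat raised in the next reply.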

Ouch, my eyes hurt!
More seriously: If that achieves what you need, great, but you have now used a variable time length and a feature size of 1, no?

but you have now used a variable time length and a feature size of 1, no?

Yes.

Ouch, my eyes hurt!

Sorry about that, lad :slight_smile: