Neural Network with variable size input

Hello,

I’m wondering if there is a pretty straightforward way (possibly this is trivial) to implement a neural network in PyTorch with a variable number of input features.

Many thanks,

Federico

It depends on the model architecture and which dimensions are variable.
E.g. if you are working with a CNN, the spatial size can be variable and you would usually use an adaptive pooling layer to create a defined activation shape before feeding it to the first linear layer. Linear layers accept variable input shapes as [batch_size, *, in_features], RNN layers accept a variable sequence length, etc., so you would need to give a bit more information about your use case.
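For instance, here is a minimal sketch (the layer sizes are just placeholders):

```python
import torch
import torch.nn as nn

# Adaptive pooling produces a fixed spatial size regardless of the
# input's height and width, so the downstream linear layer always
# sees the same number of features.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d((4, 4)),  # output is always [batch, 16, 4, 4]
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 10),
)

# Different spatial sizes work with the same model:
print(model(torch.randn(2, 3, 24, 24)).shape)   # torch.Size([2, 10])
print(model(torch.randn(2, 3, 57, 101)).shape)  # torch.Size([2, 10])
```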

Thanks! What about a standard fully connected neural network?

Hi Federico!

Focusing on your word “standard,” no, a fully-connected network will
not be able to accept a variable number of input features.

Let’s say you have an input batch of shape [nBatch, nFeatures]
and the first network layer is Linear(in_features, out_features).
If nFeatures != in_features, PyTorch will complain about a dimension
mismatch when your network tries to apply the weight matrix of your
first Linear to the input batch.
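A quick illustration (the sizes here are arbitrary):

```python
import torch
import torch.nn as nn

fc = nn.Linear(in_features=8, out_features=4)

good = torch.randn(2, 8)  # nFeatures == in_features
print(fc(good).shape)     # torch.Size([2, 4])

bad = torch.randn(2, 5)   # nFeatures != in_features
fc(bad)  # RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x5 and 8x4)
```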

If you want to do something like this, you will have to pad your input
feature vectors to all have the same length, or similarly use “dummy”
values for “missing” features in the shorter feature vectors.

Bear in mind that feature[7] in one sample has to have the same
meaning as feature[7] in another sample – otherwise your network
won’t be able to “learn” what feature[7] means. Therefore any
scheme you use to “pad” feature vectors to the same length has to
make sure that features with the same meaning end up in the same
location in the input vectors.
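As a toy sketch of such padding (pad_features and the fixed width of 8
are made up for illustration; the key point is that the real features
keep their leading positions):

```python
import torch

IN_FEATURES = 8  # the fixed width the network expects

def pad_features(x, pad_value=0.0):
    # Right-pad a 1-D feature vector with dummy values up to IN_FEATURES.
    # This assumes the "real" features always occupy the same leading
    # positions, so feature[i] keeps the same meaning across samples.
    padded = torch.full((IN_FEATURES,), pad_value)
    padded[: x.numel()] = x
    return padded

short = torch.tensor([0.3, -1.2, 0.7])  # only 3 real features
full = torch.randn(IN_FEATURES)         # all 8 features present
batch = torch.stack([pad_features(short), pad_features(full)])
print(batch.shape)  # torch.Size([2, 8])
```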

Best.

K. Frank

Many thanks to both!