How to drop nodes at specified positions in a layer

I’m trying to implement a fully connected network for a classification task, but my training data contains some missing values, and the positions of the missing values are not fixed. For example:
[x1, x2, NaN, x4, x5] [y]
[x1, x2, x3, x4, NaN] [y]
[NaN, x2, x3, NaN, x5] [y]
I have tried some imputation methods to fill in the missing values, but they inevitably introduce bias.

To avoid that bias, I’m planning to keep the missingness but drop the input-layer nodes corresponding to the missing values, so that the NaNs are never fed into the network and cause errors, while the following layers of the network stay the same. Is there a specific way to achieve this in PyTorch, either by:

  1. Dropping nodes at specified positions of a layer, or
  2. Enabling a variable input size for the network.

However, these are just the two possible solutions I could come up with; I would appreciate it if you could share other useful approaches.
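For reference, the closest thing I have tried so far is a masking sketch of option 1: replace each NaN with 0 so that the corresponding input node contributes nothing to the first linear layer. This is only an illustrative sketch (the model, sizes, and names below are made up, not part of any real dataset):

```python
import torch
import torch.nn as nn

class MaskedMLP(nn.Module):
    """Hypothetical sketch: zero out NaN inputs so those positions
    are effectively dropped from the first linear layer."""
    def __init__(self, in_features=5, hidden=16, out_features=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_features),
        )

    def forward(self, x):
        mask = ~torch.isnan(x)            # True where a value is present
        x = torch.nan_to_num(x, nan=0.0)  # NaN -> 0, so no NaNs propagate
        x = x * mask                      # zeroed positions contribute nothing
        return self.net(x)

model = MaskedMLP()
batch = torch.tensor([[1.0, 2.0, float("nan"), 4.0, 5.0]])
out = model(batch)
```

Note that this is not the same as truly removing the node (a zero input is still distinguishable from a missing one only if the network sees the mask), which is why I am asking whether there is a more principled way.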