How to deal with <PAD> in a self-defined autoencoder

I have a batch of tokens that I have padded to the same length. How can I ignore the loss on the pad tokens in my autoencoder?

The encoder and decoder look like:

import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # compress the 200-dim input down to a 25-dim code
        self.encoder = nn.Sequential(
            nn.Linear(200, 100),
            nn.Tanh(),
            nn.Linear(100, 50),
            nn.Tanh(),
            nn.Linear(50, 25),
        )
        # mirror the encoder so the output matches the 200-dim input
        self.decoder = nn.Sequential(
            nn.Linear(25, 50),
            nn.Tanh(),
            nn.Linear(50, 100),
            nn.Tanh(),
            nn.Linear(100, 200),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

Hi,
I'm not sure it would actually make sense to pass padded inputs to your autoencoder: the reconstruction will also depend on the padding values in the input. Depending on what your task is, you might want to consider something like an LSTM encoder-decoder instead. If you do keep the padded setup, you can mask the pad positions out of the reconstruction loss yourself, as sketched below.
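
Here is a minimal sketch of that masking idea, using an unreduced MSE loss so that pad positions contribute nothing to the loss or the gradient. PAD_IDX = 0, the shapes, and the random stand-ins for the inputs and the model output are all illustrative assumptions, not taken from the question:

import torch
import torch.nn as nn

PAD_IDX = 0  # hypothetical index of the <PAD> token

# stand-ins for illustration: a batch of 2 sequences padded to length 200
tokens = torch.randint(0, 10, (2, 200))   # integer token ids, 0 = <PAD>
inputs = torch.rand(2, 200)               # real-valued input fed to the model
recon = torch.rand(2, 200)                # stand-in for model(inputs)

mask = (tokens != PAD_IDX).float()        # 1.0 at real tokens, 0.0 at padding
per_element = nn.MSELoss(reduction="none")(recon, inputs)
# average only over the non-pad positions
loss = (per_element * mask).sum() / mask.sum().clamp(min=1)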

Use this…

criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)

where PAD_IDX is the index of the <PAD> token in your vocabulary. Targets equal to ignore_index are skipped, so they contribute nothing to the loss or the gradient.
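
For completeness, a minimal end-to-end sketch of this; the vocabulary size, tensor shapes, and PAD_IDX = 0 are illustrative assumptions, not taken from the question:

import torch
import torch.nn as nn

PAD_IDX = 0  # hypothetical: <PAD> sits at index 0 of the vocabulary
vocab_size = 10

criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)

# over sequences, CrossEntropyLoss takes (batch, classes, seq_len) raw logits
# and (batch, seq_len) integer targets
logits = torch.randn(2, vocab_size, 4)
targets = torch.tensor([[5, 3, 0, 0],
                        [7, 2, 9, 0]])  # trailing 0s are padding

loss = criterion(logits, targets)  # pad positions are skipped entirely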