Set weights/bias of linear layer based on condition

I’d like to add the ability to ignore padding indices to nn.Linear. What is the best way to manually set the weights/bias of an nn.Linear layer to 0 when a certain value is passed into the layer from the input features?

In my specific application, I have an embedding for protein sequences that maps numbers (tokenized amino acids) to vectors, but I need to map those vectors to the size of my model. It seems natural to use nn.Linear to receive a tensor of shape (sequence, embedding_dimension) and output a tensor of shape (sequence, model_dimension), but I need to ignore padding. Specifically, the embedded padding token would be the vector (-5, -5, -5, -5, -5, -5, -5, -5, -5) (the embedding dimension is 9), so I want to set the weights/bias of nn.Linear to 0 whenever it encounters that -5 padding vector.
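For concreteness, a minimal sketch of the setup described above might look like this (the vocabulary size of 25, model dimension of 64, and padding id 0 are made-up values for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 25 tokens, embedding_dim = 9, model_dim = 64, padding id 0
embed = nn.Embedding(25, 9, padding_idx=0)
with torch.no_grad():
    embed.weight[0].fill_(-5.0)  # make the padding token embed to (-5, ..., -5)
proj = nn.Linear(9, 64)

tokens = torch.tensor([[3, 7, 0, 0]])  # one sequence; the last two positions are padding
x = embed(tokens)  # shape (1, 4, 9); x[0, 2] and x[0, 3] are the -5 padding vector
y = proj(x)        # shape (1, 4, 64); note the padded positions are NOT zeroed here
```

This illustrates the problem: nn.Linear applies the same weights to every position, so the padded positions still produce non-zero outputs.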

If you set the parameters themselves to zero inside the module, all of the training they have accumulated will be lost.
Maybe the functional API approach would work, where you define your weight as an nn.Parameter and apply it via F.linear.
This would allow you to check a condition and either use your trained weight or any other dummy tensor.
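A minimal sketch of that idea, assuming the -5 padding vector from the question (the module name and initialization are illustrative, not a standard API): instead of zeroing the parameters, keep them intact and zero the output at padded positions via a condition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PaddingAwareLinear(nn.Module):
    """Hypothetical linear layer that outputs zeros for padding embeddings."""

    def __init__(self, in_features, out_features, pad_value=-5.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.pad_value = pad_value

    def forward(self, x):
        # True where an embedding vector consists entirely of the padding value
        pad_mask = (x == self.pad_value).all(dim=-1, keepdim=True)
        out = F.linear(x, self.weight, self.bias)
        # Select zeros for padded positions, the real projection elsewhere;
        # the trained weight and bias are never modified.
        return torch.where(pad_mask, torch.zeros_like(out), out)

m = PaddingAwareLinear(9, 4)
x = torch.randn(2, 3, 9)
x[0, 1] = -5.0          # mark one position as padding
out = m(x)              # out[0, 1] is all zeros, other positions are projected
```

Because torch.where routes the gradient only through the selected branch, the padded positions also contribute nothing to the weight/bias gradients during backpropagation.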

Also, zeroing out the gradients in the module for the padding inputs might work as well, though I’m not sure.
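One way to sketch that gradient-zeroing idea is a backward hook on the layer's output (the -5 padding value and the tensor shapes are assumptions carried over from the question):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lin = nn.Linear(9, 4)

x = torch.full((3, 9), -5.0)  # start with all-padding, then fill in real rows
x[0] = 1.0                    # rows 0 and 2 are real inputs
x[2] = 2.0                    # row 1 stays as the -5 padding vector
keep = (~(x == -5.0).all(dim=-1)).float().unsqueeze(-1)  # (3, 1): 1 real, 0 padding

out = lin(x)
# Zero the incoming gradient at padded rows, so padding never updates the weights
out.register_hook(lambda grad: grad * keep)
out.sum().backward()
```

Note that this only stops the padded positions from influencing the weight updates; the forward output at those positions is still non-zero, so it would need to be masked separately if downstream layers must see zeros.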