Equivalent of ignore_index in a continuous-valued loss

Hello everyone,

I have continuous-valued data of variable length for training a convolutional auto-encoder, something similar to having e.g. images of variable width.

I read that NLL loss has an ignore_index argument which allows part of the target not to contribute to the loss gradient. For categorical data, this makes it possible to pad all sequences to the maximum length (e.g. 300 points) with the ignore_index value, so the padding is omitted from the loss.
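For reference, this is the categorical setup I mean (shapes are just an example; ignore_index defaults to -100 in PyTorch):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 5, 300)          # (batch, num_classes, padded length)
target = torch.randint(0, 5, (2, 300))   # class index per position
target[0, 250:] = -100                   # first sample is only 250 points long
loss = F.nll_loss(F.log_softmax(logits, dim=1), target, ignore_index=-100)
```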

Would there be an efficient way to implement something similar for continuous targets (e.g. with MSE loss), rather than a for loop over each batch element to crop and compute the loss individually? It could be a designated continuous value to ignore, or something less numerically sensitive such as a floor value below which all values are ignored (e.g. my data is in [-1, 1], so I could set the ignore threshold below -1 and pad with -1.5).
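To make the question concrete, here is the kind of thing I was imagining, as a vectorized masked MSE (just a sketch; `lengths` would be a tensor of per-sample valid widths that I'd carry alongside the batch):

```python
import torch
import torch.nn.functional as F

def masked_mse(pred, target, lengths):
    # pred, target: (batch, channels, max_len); lengths: (batch,) valid widths
    positions = torch.arange(pred.size(-1), device=pred.device)
    mask = (positions[None, None, :] < lengths[:, None, None]).float()  # (batch, 1, max_len)
    se = F.mse_loss(pred, target, reduction="none")   # per-element squared errors
    return (se * mask).sum() / mask.expand_as(se).sum()  # average over valid elements only
```

Alternatively, with the sentinel-padding idea above, the mask could simply be `(target > -1.0).float()` instead of being built from lengths.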

I imagine I could bin the values (e.g. with mu-law) but before going that route I am interested in any way to avoid binning.

Second question: let’s say I want to train a VAE, which adds a regularization term on the encoding.
Because the encoder is run over the padded data (as the input), the latent loss will be computed taking the padding into account. Is there any way to prevent some input pixels from contributing to any gradients (neither the regularization nor the reconstruction loss)?
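For this second part, the only thing I can think of is to downsample the padding mask to the latent resolution and mask the per-position KL term, assuming the convolutional encoder keeps a spatial axis in the latent that roughly aligns with the input (a big assumption; `masked_kl` and the nearest-neighbour downsampling are just my sketch):

```python
import torch
import torch.nn.functional as F

def masked_kl(mu, logvar, mask):
    # mu, logvar: (batch, latent_ch, latent_len); mask: (batch, 1, max_len) input-space mask
    # downsample the input mask to the latent length (assumes spatial alignment holds)
    latent_mask = F.interpolate(mask.float(), size=mu.size(-1), mode="nearest")
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar)  # elementwise KL vs N(0, I)
    return (kl * latent_mask).sum() / latent_mask.expand_as(kl).sum().clamp(min=1.0)
```

Even then, padded pixels would still influence the valid latent positions through the receptive field of the convolutions, which is really the part I don’t see how to avoid.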

I am curious about your suggestions! Right now I feel it’s either not really feasible, or only applicable to training with batch_size = 1 and some latent pooling that is invariant to the input size…

Thanks !