Adding Noise to Decoders in Autoencoders

Hi,
I am a little confused about how to add random noise to the decoder of an autoencoder. You can imagine that the decoder has two inputs: one is the output of the encoder, and the other is random noise.
Please help.

Thank you

I suppose you want to add noise to the encoder output before feeding it to the decoder. If that is so, then you could add Gaussian noise as follows -

output = Encoder(input)
output = output + torch.sqrt(desired_variance) * torch.randn(output.shape)
model_output = Decoder(output)

Thank you for the response. I am trying it now.
One thing I want to ask: how do I choose the desired_variance value?
Also, I am getting the error “TypeError: sqrt(): argument ‘input’ (position 1) must be Tensor, not float”.
So does desired_variance have to be a tensor with different values?

Just to be clearer, I have attached a picture to show what I want.

One thing I want to ask: how do I choose the desired_variance value?

This depends on how noisy you want the data (here, the latent codes) to be during training. Data that is too noisy would lead to garbage features and might do more harm than good. Ultimately the value is set heuristically, and I would need more context about your data and model to suggest one.
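
As a very rough starting point, you could tie the noise scale to the spread of the latent codes themselves; the 0.1 factor below is purely an assumed value to tune, not a recommendation:

latent = Encoder(input)
# heuristic sketch: noise std as a fraction of the latent std (0.1 is an assumed, tunable factor)
noise_std = 0.1 * latent.std().detach()
noisy_latent = latent + noise_std * torch.randn_like(latent)
model_output = Decoder(noisy_latent)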

From the picture, I assume you want to add noise to the encoder output and feed it to the decoder.

torch.sqrt requires a tensor as input. If desired_variance is a plain Python float, do the following instead -

output = output + (desired_variance ** 0.5) * torch.randn(output.shape)
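
If it helps, here is a small self-contained sketch of the whole step with toy modules; the layer sizes and the 0.1 variance are placeholders I am assuming, not values from your setup:

import torch
import torch.nn as nn

# toy encoder/decoder just for illustration; sizes are placeholders
encoder = nn.Linear(784, 32)
decoder = nn.Linear(32, 784)

desired_variance = 0.1      # plain Python float, so use ** 0.5 rather than torch.sqrt
x = torch.randn(16, 784)    # dummy input batch

latent = encoder(x)
noisy_latent = latent + (desired_variance ** 0.5) * torch.randn_like(latent)
reconstruction = decoder(noisy_latent)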

Remember that if you do this in the model's forward calculation, it will be part of the computation graph and therefore included in backprop. However, if you perform this step in the training loop between the encoder and decoder calls, I think it will still be taken into consideration, so you might want to keep that in mind.
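
If you would rather keep the noise injection inside the model than in the training loop, one possible pattern (just a sketch; the module name and the desired_variance argument are my assumptions) is to add the noise in forward() only while self.training is True, so it is part of the autograd graph during training and skipped at eval time:

import torch
import torch.nn as nn

class NoisyAutoencoder(nn.Module):
    # encoder/decoder are existing modules; desired_variance is an assumed hyperparameter
    def __init__(self, encoder, decoder, desired_variance=0.1):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.noise_std = desired_variance ** 0.5

    def forward(self, x):
        latent = self.encoder(x)
        if self.training:
            # the noise itself is a constant for autograd, so gradients still
            # flow through the addition back into the encoder
            latent = latent + self.noise_std * torch.randn_like(latent)
        return self.decoder(latent)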