Same loss patterns while training Convolutional Autoencoder

You are using F.binary_cross_entropy_with_logits as your loss function (the criterion you defined is never used in the training loop). Since your model's output passes through an nn.Sigmoid layer, this applies the sigmoid twice. You should either switch to F.binary_cross_entropy, or remove the nn.Sigmoid layer and keep F.binary_cross_entropy_with_logits.
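A minimal sketch of the two valid pairings (the tensors here are made-up illustration data, not your model's outputs). With probabilities use binary_cross_entropy; with raw logits use binary_cross_entropy_with_logits, which applies the sigmoid internally in a numerically stable way:

```python
import torch
import torch.nn.functional as F

# Hypothetical raw model outputs (logits) and binary targets.
logits = torch.tensor([[0.5, -1.2], [2.0, 0.3]])
targets = torch.tensor([[1.0, 0.0], [1.0, 1.0]])

# Option 1: model ends with nn.Sigmoid -> feed the resulting
# probabilities to binary_cross_entropy.
probs = torch.sigmoid(logits)
loss_probs = F.binary_cross_entropy(probs, targets)

# Option 2: drop the nn.Sigmoid and pass the raw logits to
# binary_cross_entropy_with_logits (sigmoid is applied internally).
loss_logits = F.binary_cross_entropy_with_logits(logits, targets)

print(torch.allclose(loss_probs, loss_logits))  # True
```

Option 2 is generally preferred because fusing the sigmoid into the loss avoids the numerical instability of computing `log(sigmoid(x))` for large negative logits.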

Also some minor side notes:

  • Variables have been deprecated since PyTorch 0.4.0. If you are using a newer version, just remove all Variable wrappers; tensors track gradients on their own.
  • Usually you call .zero_grad() only once per iteration. Currently you are calling it on both the model and the optimizer. I would remove the model.zero_grad() call, since the optimizer already holds all of the model's parameters and will zero out their gradients.
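Putting the points above together, a cleaned-up training loop could look like the sketch below. The model, optimizer, and dummy data are placeholders standing in for your actual autoencoder and data loader:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in autoencoder; replace with your conv autoencoder
# (with the final nn.Sigmoid removed).
model = nn.Sequential(nn.Linear(8, 3), nn.Linear(3, 8))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch with values in [0, 1]; no Variable wrapping needed.
data = torch.rand(16, 8)

for epoch in range(2):
    optimizer.zero_grad()  # one zero_grad call per iteration is enough
    output = model(data)   # raw logits, since nn.Sigmoid was removed
    loss = F.binary_cross_entropy_with_logits(output, data)
    loss.backward()
    optimizer.step()
```

If you need the reconstructions as probabilities (e.g. for visualization), apply torch.sigmoid to the output outside the loss computation.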