PyTorch VAE (Variational Autoencoder) example not training (no meaningful model returned)

I tried to run this VAE example in PyTorch: https://github.com/pytorch/examples/blob/master/vae/main.py All the parameters were left at their defaults.

But the “learned” autoencoder is not meaningful: it produces a blurred image that looks like the mean of the MNIST training set, regardless of the input. Please help me figure out what went wrong during training. Thank you!

I could not generate convincing pictures with this code either. (Admittedly, I did not wait very long for training to converge: about half a day on my poor old machine.) However, I would not expect this code to generate incredible pictures anyway. It’s just an example that gives you a cue for how such an architecture can be approached in PyTorch.

You can try this version. It trains fine and can generate images via vae.sample(). However, you might want to play with the learning rate.
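For reference, sampling from a trained VAE just means decoding draws from the standard normal prior. Here is a minimal sketch of what a sample() method like the one mentioned above typically does, assuming the 20-dim latent and 28x28 MNIST output of the linked example (the Decoder class below is a stand-in for illustration, not the actual module from that code):

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Stand-in decoder using the linked example's sizes:
    20-dim latent -> 400 hidden units -> 784-pixel (28x28) image."""
    def __init__(self):
        super().__init__()
        self.fc3 = nn.Linear(20, 400)
        self.fc4 = nn.Linear(400, 784)

    def forward(self, z):
        h = torch.relu(self.fc3(z))
        # Sigmoid keeps pixel values in [0, 1], matching BCE targets.
        return torch.sigmoid(self.fc4(h))

def sample(decoder, n=64):
    # Draw latents from the N(0, I) prior and decode them into images.
    z = torch.randn(n, 20)
    with torch.no_grad():
        return decoder(z).view(n, 1, 28, 28)

images = sample(Decoder(), n=8)
```

With an untrained decoder this of course produces noise; the point is only the mechanics of prior-sampling plus decoding.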

I’ve balanced the losses and added some visualisation code here: https://github.com/pytorch/examples/pull/226 . See if this works out better for you.
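For context, “balancing the losses” here means putting the reconstruction term and the KL term on the same scale. A minimal sketch of the idea (my own illustration, assuming the 784-pixel MNIST setup of the example, not the exact code from the PR): if the BCE is averaged per element while the KL divergence is summed, the KL term dominates, the posterior collapses toward the prior, and the decoder outputs the dataset mean.

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Sum the reconstruction error over all pixels so it is on the
    # same scale as the summed KL term below.
    bce = F.binary_cross_entropy(recon_x, x.view(-1, 784),
                                 reduction='sum')
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian.
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

loss = vae_loss(torch.full((2, 784), 0.5), torch.rand(2, 784),
                torch.zeros(2, 20), torch.zeros(2, 20))
```

With mu = 0 and logvar = 0 the KL term vanishes, so the example call above reduces to the summed BCE alone.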

Hey,

I guess it’s been a long time and you must have found a solution by now, but here’s my implementation, which deals with the size_average problem.

I tried it on images from the CarRacing environment in gym, and it works pretty well.
Let me know if it’s useful to you!
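For anyone landing here on a newer PyTorch: the size_average flag referred to above is deprecated, and the same behaviour is now selected via the reduction argument. A minimal sketch of the mapping (my own illustration, not code from the linked implementation):

```python
import torch
import torch.nn.functional as F

recon = torch.rand(4, 784)   # dummy reconstructions in [0, 1]
target = torch.rand(4, 784)  # dummy targets

# Old flag            -> modern equivalent
# size_average=True   -> reduction='mean' (average over every element)
# size_average=False  -> reduction='sum'  (sum over every element)
summed = F.binary_cross_entropy(recon, target, reduction='sum')
averaged = F.binary_cross_entropy(recon, target, reduction='mean')

# The summed loss equals the averaged loss times the element count,
# which is why the two choices put the reconstruction term on very
# different scales relative to a summed KL term.
assert torch.isclose(summed, averaged * recon.numel(), rtol=1e-4)
```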