Does 'autoencoder' work?

Hi guys, I recently saw a post on this forum about an LSTM autoencoder and wanted to play with it for fun. I tried a few different architectures (LSTM, Conv1d, Conv2d) for time-series modeling and image restoration. I ran into two common problems that are discussed on this forum and elsewhere (such as Stack Overflow), but never seem to get sorted out:

  1. The restored output is simply an average of the original input (a problem with my LSTM- and Conv1d-based AEs);

  2. The loss does not decrease; however, my Conv2d image autoencoder still reproduces an image similar to the original.
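For context, the LSTM autoencoder shape I (and those threads) tried looks roughly like this: an encoder LSTM compresses the sequence into its last hidden state, which is repeated along the time axis and decoded. This is a minimal sketch; the layer sizes and names are my own illustration, not the exact code from any of the linked posts.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Sketch of the common LSTM AE: encode to last hidden state, repeat, decode."""

    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)                   # h: (1, batch, hidden)
        z = h.transpose(0, 1).repeat(1, x.size(1), 1)  # repeat the code along time
        dec, _ = self.decoder(z)
        return self.out(dec)                           # same shape as x

model = LSTMAutoencoder()
x = torch.randn(4, 10, 1)  # batch of 4 sequences of length 10
recon = model(x)           # reconstruction, shape (4, 10, 1)
```

With this setup, the decoder sees the same repeated code at every time step, which is one reason it can settle on outputting something close to the sequence mean.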

Just wondering if you guys have noticed the same, and what your solutions are?

Thanks,
Feng

Two sample problems:

  1. python - LSTM autoencoder always returns the average of the input sequence - Stack Overflow

  2. This autoencoder's loss is not decreasing, but it can still restore images:
    Convolution Autoencoder - Pytorch | Kaggle

I finally tried autoencoders with an FNN and a CNN, used MSELoss, and adjusted the weight decay and learning rate; that seems to work. The loss decreases, and the model can restore images reasonably well.
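The working setup can be sketched as below. This is illustrative only: the layer sizes, learning rate, and weight decay values are placeholders standing in for the ones I tuned, not the exact configuration.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small Conv2d autoencoder for 1-channel 28x28 images (sizes are illustrative)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),  # outputs in [0, 1] to match normalized pixel values
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
loss_fn = nn.MSELoss()
# lr and weight_decay were the knobs that mattered; these values are examples
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

x = torch.rand(8, 1, 28, 28)  # stand-in batch of normalized images
losses = []
for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), x)  # reconstruct the input
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

On a fixed batch like this, the loss curve should visibly decrease within a few steps, which was the behavior I was missing before tuning those hyperparameters.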