MLP autoencoder with attention

Does anyone have a PyTorch example showing how to implement a (self-)attention mechanism with an MLP autoencoder rather than an RNN autoencoder? I mean an autoencoder of the kind described here: http://ufldl.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity
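
To make the question concrete, here is a rough sketch of the shape I have in mind: the flat input vector is split into chunks that act as attention "tokens", a single scaled dot-product self-attention step mixes those chunks, and a plain MLP encoder/decoder does the compression. All names, dimensions, and the chunking scheme are just illustrative assumptions on my part, not an established recipe:

```python
# Illustrative sketch only: split a flat input into "tokens" so
# self-attention has a sequence axis, then autoencode with plain MLPs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnMLPAutoencoder(nn.Module):
    def __init__(self, input_dim=784, n_tokens=16, latent_dim=32):
        super().__init__()
        assert input_dim % n_tokens == 0
        self.n_tokens = n_tokens
        self.token_dim = input_dim // n_tokens
        # Linear projections for scaled dot-product self-attention.
        self.q = nn.Linear(self.token_dim, self.token_dim)
        self.k = nn.Linear(self.token_dim, self.token_dim)
        self.v = nn.Linear(self.token_dim, self.token_dim)
        # Plain MLP encoder/decoder around the attention step.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid())

    def forward(self, x):
        b = x.size(0)
        # Reshape the flat input into (batch, n_tokens, token_dim).
        t = x.view(b, self.n_tokens, self.token_dim)
        q, k, v = self.q(t), self.k(t), self.v(t)
        # Scaled dot-product self-attention over the token axis.
        scores = q @ k.transpose(1, 2) / self.token_dim ** 0.5
        attn = F.softmax(scores, dim=-1)
        t = (attn @ v).reshape(b, -1)  # re-flatten after attention
        z = self.encoder(t)            # latent code
        return self.decoder(z), z

# Usage: reconstruct MNIST-sized vectors with an MSE loss.
model = AttnMLPAutoencoder()
x = torch.rand(8, 784)
recon, z = model(x)
loss = F.mse_loss(recon, x)
loss.backward()
```

Is this roughly the right way to do it, or is there a more standard pattern (e.g. attention over encoder features instead of raw input chunks)?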