Why might a classifier fail when trained on latent vectors?

I want to use the latent vectors from my autoencoder (which gives good reconstruction results) for a classification problem, but so far I am having trouble classifying my data. I tried the following:

Method №1
I saved the latent vectors as .npy files and ran this classifier on them, but the model started to overfit very quickly (80-90% accuracy on the training set, but only 5-10% on validation), even though I have 50k latent vectors. At least it did learn the training set.
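To make it concrete, here is a minimal sketch of what method №1 looks like (the file names, classifier architecture, and hyperparameters are simplified placeholders, not my exact code):

```python
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical files: latents.npy of shape (50000, latent_dim), labels.npy of shape (50000,)
latents = torch.from_numpy(np.load("latents.npy")).float()
labels = torch.from_numpy(np.load("labels.npy")).long()

train_loader = DataLoader(TensorDataset(latents, labels), batch_size=64, shuffle=True)

latent_dim, num_classes = latents.shape[1], int(labels.max()) + 1
classifier = nn.Sequential(            # small classifier head trained directly on the saved vectors
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, num_classes),
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.001, momentum=0.9)

classifier.train()
for epoch in range(10):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(classifier(x), y)
        loss.backward()
        optimizer.step()
```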

Method №2
I tried using the saved autoencoder model with the same classifier for the same purpose (keeping all the vectors as .npy files takes too much storage). I loaded the encoder part, froze all of its layers with param.requires_grad = False (I only need it to produce latent vectors, not to train the autoencoder), and passed only the classifier to the optimizer: optim.SGD(clf.classifier.parameters(), lr=0.001, momentum=0.9). I also called clf.classifier.train() and clf.encoder.eval(). But this model simply won't work: it predicts mainly one class for all vectors in a batch, which gives at most 16% accuracy on the training set.
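Again as a simplified sketch (the encoder definition, input shape, and dimensions below are placeholders; in my code the encoder is loaded from the trained autoencoder checkpoint):

```python
import torch
import torch.nn as nn
import torch.optim as optim

class LatentClassifier(nn.Module):
    def __init__(self, encoder, latent_dim, num_classes):
        super().__init__()
        self.encoder = encoder            # pretrained encoder from the autoencoder
        self.classifier = nn.Sequential(  # head trained from scratch
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        z = self.encoder(x)               # latent vector; encoder weights are frozen
        return self.classifier(z)

# Placeholder encoder; in my case this comes from the trained autoencoder
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 32))
for param in encoder.parameters():
    param.requires_grad = False           # freeze the encoder

clf = LatentClassifier(encoder, latent_dim=32, num_classes=10)
clf.encoder.eval()                        # encoder stays in eval mode
clf.classifier.train()                    # only the head is being trained

optimizer = optim.SGD(clf.classifier.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch, just to show the call pattern
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(clf(x), y)
loss.backward()
optimizer.step()
```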

Where did I possibly go wrong? And what are some ways to get the classifier to work?