Training high-dimensional features


I have a vocabulary of ~60,000 words, and my dataset consists of document vectors with this dimensionality. I am training a neural network on them, and I am unsure whether PyTorch can really handle input vectors this high-dimensional.


Nothing stops you from doing it. :slight_smile:
Do you have any concerns, or is your model not training at the moment?
You could certainly try to reduce the feature dimension, but you could also just try out different architectures and see how it goes.
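To make the point concrete, here is a minimal sketch of a feed-forward classifier that consumes 60,000-dimensional document vectors directly. The hidden size, dropout rate, and number of classes are all assumptions for illustration; the first `Linear` layer simply maps the high-dimensional input down (60,000 → 256 is about 15M parameters, which is entirely manageable):

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 60_000   # your vocabulary size
HIDDEN_DIM = 256      # assumed hidden size
NUM_CLASSES = 10      # assumed number of target classes

# A plain feed-forward net; the first layer handles the
# high-dimensional input in a single matrix multiplication.
model = nn.Sequential(
    nn.Linear(VOCAB_SIZE, HIDDEN_DIM),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(HIDDEN_DIM, NUM_CLASSES),
)

# Forward pass with a dummy batch of 8 document vectors.
x = torch.randn(8, VOCAB_SIZE)
logits = model(x)
print(logits.shape)  # torch.Size([8, 10])
```

If your document vectors are sparse (e.g. bag-of-words counts), you might also consider feeding token indices into an `nn.EmbeddingBag` instead of materializing dense 60k-dimensional inputs, which can be much more memory-efficient.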