How to load custom pretrained word embeddings in the new torchtext pipeline?

Given that the new torchtext dataset and dataloading pipeline now involves extending `torch.utils.data.Dataset` with a custom dataset class, how do we load pretrained word embeddings into this pipeline?
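Here is roughly what I have in mind: a minimal sketch (not a definitive recipe) that builds a vocab from the corpus, aligns a weight matrix to the vocab indices, and loads it with `nn.Embedding.from_pretrained`. The tokenizer, the toy corpus, and the `pretrained` dict standing in for a parsed embedding file are all placeholder assumptions.

```python
from collections import Counter

import torch
import torch.nn as nn

def build_vocab(texts, specials=("<unk>", "<pad>")):
    # Whitespace tokenization is a placeholder for a real tokenizer.
    counter = Counter(tok for t in texts for tok in t.split())
    itos = list(specials) + sorted(counter, key=counter.get, reverse=True)
    stoi = {w: i for i, w in enumerate(itos)}
    return itos, stoi

def load_pretrained(itos, pretrained, dim):
    # Weight rows follow vocab order; words missing from the pretrained
    # vectors get zeros (one common choice; random init is another).
    weight = torch.zeros(len(itos), dim)
    for i, word in enumerate(itos):
        vec = pretrained.get(word)
        if vec is not None:
            weight[i] = torch.tensor(vec)
    return nn.Embedding.from_pretrained(
        weight, freeze=False, padding_idx=itos.index("<pad>")
    )

# Toy corpus and toy 2-d "pretrained" vectors, just for illustration.
texts = ["the cat sat", "the dog ran"]
pretrained = {"the": [0.1, 0.2], "cat": [0.3, 0.4]}
itos, stoi = build_vocab(texts)
emb = load_pretrained(itos, pretrained, dim=2)
```

The custom `Dataset.__getitem__` would then return index tensors via `stoi`, and `emb` is used as the model's first layer.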

Also, how do we sort the dataset into batches of similar length to minimize padding?
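For the bucketing part, this is the approach I am considering: sort example indices by length, slice them into batches, and shuffle the batch order, assuming sequence lengths are known up front. The resulting list of index batches could be passed as `batch_sampler` to a `DataLoader`.

```python
import random

def bucket_batches(lengths, batch_size, shuffle=True, seed=0):
    # Sort example indices by sequence length so each batch groups
    # similar-length examples, then shuffle the *batch* order so
    # training still sees batches in a random order.
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    batches = [order[i:i + batch_size]
               for i in range(0, len(order), batch_size)]
    if shuffle:
        random.Random(seed).shuffle(batches)
    return batches

lengths = [5, 2, 9, 3, 8, 1]            # length of each example
batches = bucket_batches(lengths, batch_size=2, shuffle=False)
# -> [[5, 1], [3, 0], [4, 2]]: each batch holds similar-length examples
```

Would combining this with a `collate_fn` that calls `torch.nn.utils.rnn.pad_sequence` be the idiomatic way, or is there a built-in sampler for this?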