Effective Batching with images of different shapes in PyTorch

Say I have an image folder with 9k images: 4k images of size (100, 400), 2k images of size (150, 350), and the rest of size (200, 500). How can I train on such data? In particular, how would I define a collate_fn to handle these images and efficiently create batches of same-sized images? If I preprocess and save three HDF5 files, one per image size, how would I use all three in a Dataset and collate_fn? Or is there a way a single HDF5 file can store all three subsets without padding?
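To illustrate the kind of grouping I have in mind: below is a minimal sketch of a custom batch sampler that buckets dataset indices by image size, so every batch the DataLoader yields contains only same-sized images and the default collate_fn can stack them without padding. The `SizeBucketSampler` name and the `sizes` list (one `(H, W)` tuple per dataset index) are my own hypothetical constructs, not part of any PyTorch API.

```python
import torch
from torch.utils.data import Sampler


class SizeBucketSampler(Sampler):
    """Yields batches of indices where all images share one (H, W) size.

    sizes: list of (H, W) tuples, one per dataset index (hypothetical input,
    e.g. collected once by scanning the image folder).
    """

    def __init__(self, sizes, batch_size, shuffle=True):
        self.sizes = sizes
        self.batch_size = batch_size
        self.shuffle = shuffle
        # Group indices into one bucket per distinct image size.
        self.buckets = {}
        for idx, size in enumerate(sizes):
            self.buckets.setdefault(size, []).append(idx)

    def __iter__(self):
        batches = []
        for indices in self.buckets.values():
            indices = list(indices)
            if self.shuffle:
                perm = torch.randperm(len(indices)).tolist()
                indices = [indices[i] for i in perm]
            # Chunk each bucket; every chunk is size-homogeneous by construction.
            for start in range(0, len(indices), self.batch_size):
                batches.append(indices[start:start + self.batch_size])
        if self.shuffle:
            perm = torch.randperm(len(batches)).tolist()
            batches = [batches[i] for i in perm]
        return iter(batches)

    def __len__(self):
        # Number of batches: ceil(bucket_size / batch_size), summed over buckets.
        return sum(-(-len(v) // self.batch_size) for v in self.buckets.values())
```

Usage would be `DataLoader(dataset, batch_sampler=SizeBucketSampler(sizes, 32))`; since each batch is uniform in shape, no custom collate_fn is needed. Is something along these lines the recommended approach, or is there a more idiomatic solution?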