Changing image sizes after each batch

Hi!

I have a dataset of 1000x1000 resolution images that I want to use to train a Fully Convolutional Network (FCN). I want the network to be robust to multiple input sizes, so I want to train it on multi-resolution images. In my Dataset class's __getitem__() I have some code that randomly resizes the image. This works when the batch size is 1 (with a larger batch size, images of different resolutions would end up in the same batch, which can't be collated). But it is way too slow. I want to utilize my multiple GPUs and use a larger batch size.
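Conceptually, my __getitem__() does something like this (a simplified sketch; the class name, the 500-1000 size range and the PIL-image assumption are just placeholders):

import random
import torchvision.transforms.functional as TF
from torch.utils.data import Dataset

class MultiResDataset(Dataset):  # placeholder name
    def __init__(self, images):
        self.images = images  # list of 1000x1000 PIL images

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # A different random resolution per sample: fine for batch size 1,
        # but images in a larger batch end up with mismatched shapes.
        size = random.randint(500, 1000)  # placeholder range
        img = TF.resize(self.images[idx], [size, size])
        return TF.to_tensor(img)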

So I want to create batches of same-resolution images, with a different resolution per batch. I can't figure out where PyTorch assembles these batches, so I don't know where to implement this. Any ideas?

I will just change the image size every epoch.
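Something along these lines should do it (a rough sketch; num_epochs, dataset, loader and train_step are placeholders, and target_size is a hypothetical attribute that __getitem__ would use for resizing):

import random

for epoch in range(num_epochs):
    # One resolution for the whole epoch, so every batch has uniform shapes
    dataset.target_size = random.randint(500, 1000)
    for images, targets in loader:
        train_step(images, targets)

One caveat: if the DataLoader is created with persistent_workers=True, the workers keep their own copy of the dataset across epochs, so changing the attribute from the main process won't be picked up; with the default non-persistent workers it should be, since they are re-spawned at the start of each epoch.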

I was thinking that using the following code in the DataLoader pipeline might help.
RandomCrop will randomly crop the images, and random.randint picks a different crop size each time the transform is constructed, so you can get a new size per batch by re-creating it.
I have not written the entire code though.

import random
import torchvision.transforms as tf

# A crop transform with a randomly chosen height and width (180-224 px)
crop = tf.RandomCrop([random.randint(180, 224), random.randint(180, 224)])
img = crop(img)  # apply to each image
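And if you want a different size per batch rather than per epoch, one place to do it is a custom collate_fn (just a sketch, assuming the dataset returns full-resolution image/mask tensors):

import random
import torch
import torchvision.transforms as tf
import torchvision.transforms.functional as TF
from torch.utils.data import DataLoader

def random_size_collate(batch):
    # Pick one crop size for the whole batch so the tensors can be stacked
    size = random.randint(180, 224)
    imgs, masks = [], []
    for img, mask in batch:
        # Use the same crop location for an image and its mask
        i, j, h, w = tf.RandomCrop.get_params(img, (size, size))
        imgs.append(TF.crop(img, i, j, h, w))
        masks.append(TF.crop(mask, i, j, h, w))
    return torch.stack(imgs), torch.stack(masks)

loader = DataLoader(dataset, batch_size=16, num_workers=4, collate_fn=random_size_collate)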