Is there a way to dynamically use image size and batch size when training?

Hello,

I am planning to use multiprocessing for my training.

During training, I want to use various image sizes and batch sizes.

For example, when the image size is 512x512 the batch size is 4; when it is 256x256 the batch size is 8; and when it is 128x128 the batch size is 16.

I want to be able to switch the image size dynamically among 512x512, 256x256, 128x128, etc. during the training process.

For example, one could create a dataloader for each image size, put them in a list, and use multiprocessing.
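A minimal sketch of this idea, keeping one `DataLoader` per image size and drawing a batch from a randomly chosen one each step. `ResizableDataset` and its `size` parameter are placeholders I made up for illustration; a real dataset would load and resize actual images there.

```python
import random
import torch
from torch.utils.data import DataLoader, Dataset

# Hypothetical dataset that produces images at a fixed size; `size` is
# an illustrative parameter, not part of any existing API.
class ResizableDataset(Dataset):
    def __init__(self, n, size):
        self.n, self.size = n, size
    def __len__(self):
        return self.n
    def __getitem__(self, i):
        # stand-in for loading and resizing a real image
        return torch.randn(3, self.size, self.size)

# one DataLoader per (image size, batch size) pair from the question
configs = {512: 4, 256: 8, 128: 16}
loaders = {s: DataLoader(ResizableDataset(64, s), batch_size=b)
           for s, b in configs.items()}
iters = {s: iter(dl) for s, dl in loaders.items()}

# each training step, pick a size at random and draw one batch
size = random.choice(list(configs))
batch = next(iters[size])
print(batch.shape)  # (configs[size], 3, size, size)
```

Each loader can also run with its own worker processes (`num_workers`), which is where the multiprocessing would come in.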

This is just an idea; I don’t know how to implement it concretely.

Please let me know if there is a way to dynamically use image size and batch size during training.

Please help, thank you.


Good question. I would like to know that too. The only thing I was able to come up with is creating batches of 16x1x512x512 and then cropping them to the desired size, but that is terribly inefficient.
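For concreteness, the cropping idea above could look like this: load every batch at the maximum size and slice it down to the target resolution. The wasted pixels are exactly why it is inefficient.

```python
import torch

# fixed-size batch as loaded: 16 x 1 x 512 x 512
batch = torch.randn(16, 1, 512, 512)

# crop the top-left corner down to the target size (a center or random
# crop works the same way); the extra pixels loaded at 512x512 are
# simply discarded, which is the inefficiency mentioned above
target = 256
cropped = batch[:, :, :target, :target]
print(cropped.shape)  # torch.Size([16, 1, 256, 256])
```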

You can set the batch size in the DataLoader to the maximum (e.g. 16). Then during training, if you get a 16x3x512x512 batch, you can split it into 4 groups of 4x3x512x512 and run the optimization step four times.
If the input is 16x3x256x256, you can split it into 2 groups.
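A sketch of this split-and-step approach, assuming the size-to-batch schedule from the question; `chunks_for` is a helper name I introduced, and the tiny model is only there to make the loop runnable.

```python
import torch

def chunks_for(batch, image_size):
    # assumption: the loader always yields the maximum batch (16), and
    # we split it so the per-step batch size matches the schedule
    # 512 -> 4, 256 -> 8, 128 -> 16 from the question
    per_step = {512: 4, 256: 8, 128: 16}[image_size]
    return torch.split(batch, per_step, dim=0)

# toy model and optimizer just to show where the steps happen
model = torch.nn.Conv2d(3, 1, 3, padding=1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

big_batch = torch.randn(16, 3, 512, 512)
for sub in chunks_for(big_batch, 512):  # four groups of 4x3x512x512
    opt.zero_grad()
    loss = model(sub).mean()
    loss.backward()
    opt.step()  # one optimizer step per sub-batch
```

Note this changes the effective batch size per optimizer step, so the learning rate may need adjusting for the smaller sub-batches.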