Is `torch.backends.cudnn.benchmark = True` the right choice for training on images of various sizes?

My dataset contains images of various sizes, mostly 1024×768, 800×536, etc.
I find that the first epoch takes quite a long time, about 10x longer than the other epochs.
From the second epoch onward, however, the training time per epoch is stable.

If I set cudnn.benchmark=False, training time increases by about 20% in my case.
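
For reference, here is a minimal sketch of how I understand benchmark mode to interact with varying input shapes (the model and sizes below are just illustrative, not my actual setup):

```python
import torch
import torch.nn as nn

# With benchmark=True, cuDNN times the available convolution algorithms
# for each new input shape it encounters and caches the fastest one.
# Every previously unseen (batch, channels, H, W) shape pays the
# benchmarking overhead once, which would explain the slow first epoch;
# later batches with a cached shape reuse the selected algorithm.
torch.backends.cudnn.benchmark = True

model = nn.Conv2d(3, 16, kernel_size=3, padding=1).cuda()

# Illustrative shapes mirroring the sizes mentioned above.
for h, w in [(768, 1024), (536, 800), (768, 1024)]:
    x = torch.randn(8, 3, h, w, device="cuda")
    y = model(x)  # first call per (h, w) triggers benchmarking; repeats are fast
```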

I wonder whether this speed gain comes at the cost of degraded model performance, given that my training images vary in size.