Deciding on image size vs model size

I am trying to train a 3D ResNet-18 / ResNet-34 / ResNet-50 model similar to the one here:

Yet I want to use the largest image I can fit in 8 GB of GPU RAM (NVIDIA RTX 2080).

Is there a quick way to figure this out rather than just trying? I have tried volumes of size 64×64×64 and it worked, yet I need to know the largest size I can use.

Thank you.

@ptrblck any idea, Peter? Thanks in advance.

I don’t think there is a reliable way to estimate the memory usage properly.
This is especially true if you are using cudnn in benchmark mode (via torch.backends.cudnn.benchmark = True): cudnn will profile various algorithms for your use case and select the fastest one that fits onto your device. E.g. a faster but more memory-hungry algorithm might be picked if you have enough memory left, and skipped if not.
This can mean that the memory usage won’t scale with the input size as you would expect.

Of course you could try to calculate the theoretical memory usage, but I would just run some tests with different input shapes and measure the actual usage.
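Those tests can be automated with a binary search over the input side length. The sketch below is a minimal illustration, not a definitive implementation: the `fits` callable is hypothetical, and in a real script it would build the model, run one forward/backward pass on a dummy batch, and return False when a CUDA out-of-memory error is raised.

```python
def find_max_side(fits, lo=32, hi=512):
    """Binary-search the largest input side length for which fits(side) is True.

    `fits` is a user-supplied callable (hypothetical here); with PyTorch it
    might run one forward/backward pass on a dummy batch of shape
    (N, C, side, side, side) and return False on an out-of-memory error.
    Assumes fits() is monotonic: once an input is too big, bigger ones are too.
    """
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if fits(mid):
            best = mid
            lo = mid + 1   # it fits; try a larger input
        else:
            hi = mid - 1   # too big; try a smaller input
    return best
```

With a real model you would also want to free cached memory between trials (e.g. torch.cuda.empty_cache()), and keep in mind that cudnn benchmark mode can shift the numbers, as noted above.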

Hi @Naif40, as @ptrblck explained, there is no fully reliable estimate, but you could use torchsummary, for example, which will give you a rough estimate of GPU usage in MB.

Hence, you can extrapolate from your current image size (CMIIW, maybe what you mean is 64x64, i.e. 3x64x64 in Channel x Height x Width format):

  • e.g. for a 3x64x64 input, say torchsummary reports 1000 MB;
  • your headroom is then 8000 / 1000 = 8, so you could scale each spatial dimension by sqrt(8) ≈ 2.83, i.e. increase your image to roughly 3x181x181.
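The arithmetic above can be written out directly. One caveat worth adding: since the original question is about a 3D ResNet with 64×64×64 volumes, memory grows with the cube of the side length for volumetric inputs, so the per-dimension scale factor is the cube root of the headroom rather than the square root. This is a rough sketch under the same assumptions as the example (the 1000 MB figure is the hypothetical torchsummary reading, and activation memory is assumed to scale linearly with the number of pixels/voxels):

```python
# Rough extrapolation of the largest input side from one measured data point.
measured_mb = 1000.0   # hypothetical torchsummary estimate at the base size
budget_mb = 8000.0     # 8 GB card
headroom = budget_mb / measured_mb   # 8.0

base_side = 64
side_2d = round(base_side * headroom ** (1 / 2))  # 2D images: scale by sqrt(8)
side_3d = round(base_side * headroom ** (1 / 3))  # 3D volumes: scale by cbrt(8)

print(side_2d, side_3d)  # 181 and 128
```

So under these assumptions an 8× memory budget takes a 2D input from 3x64x64 to about 3x181x181, but a 3D volume only from 64x64x64 to about 128x128x128.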

But again, this is only an estimate, and you won’t be able to use 100% of your GPU memory in practice, since the display/GUI and other processes also consume some of it.
So use it keeping those things in mind. Hope it helps, cheers~