I don’t think there is a reliable way to estimate the memory usage up front.
Especially if you are using cudnn in benchmark mode (via torch.backends.cudnn.benchmark = True), since cudnn will profile various algorithms for your use case and select the fastest one that fits into the available device memory. E.g. a faster but more memory-hungry algorithm might be picked if enough memory is left, and skipped if not.
This could mean that the memory usage won’t scale with the increase in input size as expected.
Of course you could try to calculate a theoretical memory usage, but I would just run some tests with different input shapes to get an empirical estimate.
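A minimal sketch of such a test, assuming a toy model and a CUDA device (the model and shapes are just placeholders for your own setup):

```python
import torch

def measure_peak_mem(model, shape, device="cuda"):
    """Run one forward/backward pass and return the peak allocated memory in MB."""
    torch.cuda.reset_peak_memory_stats(device)
    x = torch.randn(*shape, device=device)
    out = model(x)
    out.sum().backward()  # include gradients, as training would
    return torch.cuda.max_memory_allocated(device) / 1024**2

if torch.cuda.is_available():
    # Placeholder model; the adaptive pooling lets it accept any spatial size.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, 3, padding=1),
        torch.nn.AdaptiveAvgPool2d(1),
        torch.nn.Flatten(),
        torch.nn.Linear(16, 10),
    ).cuda()
    for size in (64, 128, 256):
        mb = measure_peak_mem(model, (1, 3, size, size))
        print(f"{size}x{size}: {mb:.1f} MB")
```

Note that torch.cuda.max_memory_allocated only tracks PyTorch's own allocations, so the cudnn workspace picked in benchmark mode may not be fully reflected.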
Hence, you could extrapolate from your current image size (CMIIW, I assume you mean 64x64, i.e. 3x64x64 in Channels x Height x Width format). Say torchsummary reports ~1000MB for a 3x64x64 input and you have 8000MB of device memory; your headroom is then 8000 / 1000 = 8. Since activation memory grows roughly linearly with the number of pixels (H x W), you could scale each spatial dimension by sqrt(8) ≈ 2.83, i.e. increase the input to about 3x181x181.
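The arithmetic above as a quick sketch (the 8000MB and 1000MB figures are the hypothetical numbers from this example, not measured values):

```python
import math

total_mb = 8000     # assumed total GPU memory
measured_mb = 1000  # assumed usage at 3x64x64

ratio = total_mb / measured_mb  # 8.0: rough headroom factor
scale = math.sqrt(ratio)        # ~2.83: memory grows ~linearly with H*W
new_side = int(64 * scale)      # ~181

print(ratio, round(scale, 2), new_side)  # 8.0 2.83 181
```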
But again, this is only an estimate, and you won't be able to use 100% of your GPU memory anyway, since the GUI or other processes may already occupy part of it.
So use it with those caveats in mind. Hope it helps, cheers~