Running on a GPU whose memory is larger than my model's max_memory_allocated

Hi, I have a model trained on a GeForce RTX 3090. nvidia-smi shows that 17 GB of memory are occupied, but in my training script torch.cuda.max_memory_allocated reports that only 10 GB were allocated. I know PyTorch uses a caching memory allocator. My question is: will my model run on a GPU whose total memory is lower than 17 GB but higher than 10 GB?
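
Here is a minimal sketch of how I read the two numbers (the training step in the middle stands in for my actual code):

```python
import torch

device = torch.device("cuda:0")
torch.cuda.reset_peak_memory_stats(device)

# ... run one full training step here (forward + backward + optimizer step) ...

# Peak memory actually handed out to tensors by PyTorch
allocated_gb = torch.cuda.max_memory_allocated(device) / 1024**3
# Peak memory reserved by the caching allocator (roughly what nvidia-smi
# shows, minus the CUDA context overhead)
reserved_gb = torch.cuda.max_memory_reserved(device) / 1024**3
print(f"peak allocated: {allocated_gb:.2f} GB, peak reserved: {reserved_gb:.2f} GB")
```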

It depends on how large your model is and how large your inputs and batch size are. The gap between 17 GB and 10 GB is mostly memory the caching allocator has reserved but not handed back out to tensors, plus the CUDA context, so the 10 GB peak is the better estimate of what you actually need. That leaves a good amount of headroom, so you should be fine unless you are doing some huge vision problem.
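
If you want to sanity-check this before you get access to the smaller card, one option is to cap how much memory the allocator may use on the 3090 and re-run a training step. A sketch, assuming PyTorch 1.8+ (which added torch.cuda.set_per_process_memory_fraction); the 12 GB target here is a hypothetical stand-in for the smaller GPU's size:

```python
import torch

device = torch.device("cuda:0")
total = torch.cuda.get_device_properties(device).total_memory

# Pretend the card only has 12 GB (hypothetical target size): allocations
# beyond this fraction raise an out-of-memory error instead of succeeding.
target_bytes = 12 * 1024**3
torch.cuda.set_per_process_memory_fraction(target_bytes / total, device)

# ... run one full training step here; an OOM means the smaller GPU
# likely won't fit the workload ...
```

Note that the cap applies to the allocator's reserved pool (the number nvidia-smi roughly reflects), so if a step completes under the cap, that is a fairly conservative sign the smaller card will work.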

Hi Dwight_Foster, thank you for your answer. I will test on other GPUs later.