PyTorch: GPU memory not freed when training with different input sizes

I use PyTorch to train a network with a fixed batch size of 32, but with different input sizes (320x320, 416x416, 512x512). I use the command “nvidia-smi -l” to watch GPU memory usage. GPU memory grows to the level needed for the largest input size (512x512) and is never reduced when the input size drops to 320x320 or 416x416. How can I release the unused memory cache?

Hi,

PyTorch uses a caching memory allocator that is much faster than requesting memory from the driver each time, which is why nvidia-smi keeps showing the peak usage. If you don’t need that memory for anything else, I would advise against clearing the cache. If you really do need it back, you can call torch.cuda.empty_cache() to release the cached blocks.
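A minimal sketch of how this behaves, assuming a recent PyTorch where torch.cuda.memory_allocated() and torch.cuda.memory_reserved() are available; the tensor shape here is just an illustration:

```python
import torch

# Allocate a batch-sized tensor on the GPU (hypothetical shape).
x = torch.empty(32, 3, 512, 512, device="cuda")

print(torch.cuda.memory_allocated())  # bytes held by live tensors
print(torch.cuda.memory_reserved())   # bytes held by the caching allocator (what nvidia-smi sees)

del x  # the tensor is freed, but the allocator keeps the block cached for reuse

torch.cuda.empty_cache()  # return unused cached blocks to the driver

print(torch.cuda.memory_reserved())   # reserved memory drops; nvidia-smi reflects the change
```

Note that empty_cache() only releases blocks not currently in use by live tensors, and subsequent allocations will simply rebuild the cache, so calling it inside a training loop mostly adds overhead.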
