Function to limit cached memory

Hello,

Is there any function available to limit the memory that is cached?

Thanks,
Mahendra S

Hi,

You mean GPU memory?
If so, no, unfortunately there is not.
Why would you need such a function?

I’m running inference with a resized UNet. The largest input size it accepts is 1280x1280, for which PyTorch caches around 4.5 GB of memory. For some edge cases, such as a 505x960 input (not a good aspect ratio), the cached memory grows to 7.8 GB. This doesn’t happen often.

I’m running 2 workers, and the GPU has 16 GB of memory. I get OOM errors for some of the edge cases mentioned above.
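
A possible stopgap (not a real limit) could be to release the cached blocks after one of these edge-case inputs, so the other worker can reclaim them. A minimal sketch, assuming a recent PyTorch where `torch.cuda.memory_reserved()` and `torch.cuda.empty_cache()` are available; the `run_inference` helper and the 6 GB threshold are hypothetical placeholders for the actual UNet setup:

```python
import torch

def run_inference(model, image):
    # Hypothetical helper: `model` and `image` stand in for the real UNet pipeline.
    with torch.no_grad():
        out = model(image.cuda())
    # memory_reserved() reports how much memory PyTorch's caching allocator is holding.
    # If an odd-sized input made the cache balloon, hand the blocks back to the driver.
    # The 6 GB threshold here is arbitrary and only for illustration.
    if torch.cuda.memory_reserved() > 6 * 1024**3:
        torch.cuda.empty_cache()
    return out.cpu()
```

Calling `empty_cache()` does not free tensors that are still referenced, and it makes the next allocation slower, so it only helps for the occasional edge case rather than as a hard cap.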

We do not have such a feature at the moment.

The main limitation I recall is that the CUDA driver itself uses a significant amount of memory that we cannot track. The limit set by the user may therefore not be respected because of this extra memory, which makes the feature much weaker, since it would behave unexpectedly for the end user.
If you have a proposal to overcome these issues, please feel free to open a feature request on GitHub!
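
If it helps to see that gap in practice, here is a rough sketch comparing what the caching allocator reports with what the driver itself sees. It assumes a recent PyTorch where `torch.cuda.mem_get_info()` is available; the tensor size is just an example:

```python
import torch

# Allocate ~1 GiB of float32 data so the caching allocator holds some memory.
x = torch.empty(1024, 1024, 256, device="cuda")

reserved = torch.cuda.memory_reserved()   # memory held by PyTorch's caching allocator
free, total = torch.cuda.mem_get_info()   # free/total device memory as seen by the driver
driver_side_usage = total - free

print(f"allocator reserved : {reserved / 1024**3:.2f} GiB")
print(f"driver-side usage  : {driver_side_usage / 1024**3:.2f} GiB")
# driver_side_usage counts everything on the device: this process's CUDA context,
# driver allocations, and any other processes. PyTorch cannot see that extra memory,
# which is why a user-set cache limit could still be exceeded.
```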