How can we release GPU memory cache?

Hi,

torch.cuda.empty_cache() will release all the cached GPU memory that can be freed.
If some memory is still in use after calling it, that means a Python variable (either a torch Tensor or a torch Variable) still references it, so it cannot be safely released while you can still access it.

You should also make sure you are not holding onto objects in your code that keep growing with each iteration of your loop.
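A minimal sketch of the pattern described above: the cache can only return blocks whose tensors are no longer referenced, so delete the references first, then call `empty_cache()`. The tensor shape and the `print` formatting here are illustrative, not part of any API:

```python
import torch

if torch.cuda.is_available():
    # Allocate a tensor; its memory is held as long as `x` references it.
    x = torch.randn(1024, 1024, device="cuda")

    reserved_before = torch.cuda.memory_reserved()

    # Drop the last Python reference so the allocator can free the block,
    # then release the cached (but now unused) memory back to the driver.
    del x
    torch.cuda.empty_cache()

    reserved_after = torch.cuda.memory_reserved()
    print(f"reserved bytes: {reserved_before} -> {reserved_after}")
else:
    print("CUDA not available; nothing to free.")
```

Calling `empty_cache()` without first deleting `x` would not change the numbers, because the allocator cannot reclaim memory that a live tensor still uses.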