Out of memory when I use torch.cuda.empty_cache()

I want to release GPU memory, so I call torch.cuda.empty_cache(), but it throws an error:

```
File "python3.6/site-packages/torch/cuda/__init__.py", line 426, in empty_cache
RuntimeError: CUDA error: out of memory
```

If I do not call torch.cuda.empty_cache(), everything works well. There is enough free memory on the GPU, but as soon as I call torch.cuda.empty_cache(), it throws this error. Can someone tell me the reason?


This issue sounds a bit weird.
Are you able to create a dummy tensor on your GPU before empty_cache raises this OOM error?
How much memory is being used before the call?
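Both checks above can be done with PyTorch's memory introspection API. A minimal sketch (assuming a recent PyTorch; on older versions `torch.cuda.memory_reserved` was named `memory_cached`):

```python
import torch

def to_mb(nbytes):
    # Convert a byte count to megabytes for readable reporting.
    return nbytes / 1024 ** 2

if torch.cuda.is_available():
    # Report allocated vs. reserved (cached) memory on every visible GPU.
    for i in range(torch.cuda.device_count()):
        dev = torch.device(f'cuda:{i}')
        print(f"cuda:{i} allocated: {to_mb(torch.cuda.memory_allocated(dev)):.1f} MB, "
              f"reserved: {to_mb(torch.cuda.memory_reserved(dev)):.1f} MB")
else:
    print("CUDA not available")
```

Note these counters only cover memory held by this PyTorch process; memory used by other processes (visible in `nvidia-smi`) is not included.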

I solved this problem.
The reason is that torch.cuda.empty_cache() creates a CUDA context on gpu0 by default, which takes about 500 MB.
When I hit this problem, my gpu0 was fully occupied. So if I wrap the call in a device context:

```python
with torch.cuda.device('cuda:1'):
    torch.cuda.empty_cache()
```

no memory allocation occurs on gpu0.
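An alternative to the device context, as a sketch of my own (not something tested in this thread): hide gpu0 from the process entirely with `CUDA_VISIBLE_DEVICES`, so the CUDA context cannot land on it. This must happen before the first CUDA call, ideally before importing torch:

```python
import os

# Hide physical GPU 0 from this process; only physical GPU 1 remains visible,
# and PyTorch will see it as 'cuda:0'. Must be set before CUDA is initialized.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

# import torch and run as usual; empty_cache() can now only touch physical GPU 1.
print(os.environ['CUDA_VISIBLE_DEVICES'])
```

The trade-off is that the process loses access to gpu0 entirely, whereas the `torch.cuda.device` context only scopes a single call.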


Thank you, Wu,
This saved my day!!!