torch.cuda.memory_allocated() returns 0 if PYTORCH_NO_CUDA_MEMORY_CACHING=1

There are clearly tensors allocated in my GPU memory, yet torch.cuda.memory_allocated() returns 0. When I set the PYTORCH_NO_CUDA_MEMORY_CACHING environment variable back to 0, it seems to work fine. Is this a bug?

I’ve read the PyTorch documentation on memory management, but I still don’t understand this behavior.
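
A minimal sketch of what I’m running (the tensor size is arbitrary); I toggle the environment variable in the shell before launching:

```python
# Run twice and compare:
#   PYTORCH_NO_CUDA_MEMORY_CACHING=0 python repro.py  -> nonzero bytes reported
#   PYTORCH_NO_CUDA_MEMORY_CACHING=1 python repro.py  -> 0 bytes reported
import os
import torch

x = torch.ones(1024, 1024, device="cuda")  # ~4 MiB float32 tensor on the GPU
print("PYTORCH_NO_CUDA_MEMORY_CACHING =",
      os.environ.get("PYTORCH_NO_CUDA_MEMORY_CACHING", "<unset>"))
print("memory_allocated:", torch.cuda.memory_allocated())  # 0 when caching is disabled
```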

Disabling the caching allocator is a debugging feature, and some utilities, such as CUDA Graphs, won’t work without it. torch.cuda.memory_allocated() reports the allocation statistics tracked by the caching allocator, so it returns 0 once the allocator is disabled. You could suggest a fix if you are interested in seeing the used memory stats in this mode.


Should I use torch.cuda.mem_get_info() in this case? What’s the difference? @ptrblck

torch.cuda.mem_get_info() uses the CUDA runtime API and reports the free and total memory of the specified device. These numbers are device-wide, so they include memory used by other processes besides your PyTorch script. You can of course use it, but you have to interpret the results accordingly and subtract the memory usage of other processes. Also, it won’t give you any caching information, but you are already disabling the cache explicitly.
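
Something like this, as a rough sketch (the delta trick assumes no other process allocates or frees memory on the device in between, and with caching enabled the delta reflects the allocator’s block size rather than the tensor’s exact size):

```python
import torch

# Device-wide numbers from the CUDA runtime: they include every process on this GPU.
free, total = torch.cuda.mem_get_info()  # bytes, for the current device
used = total - free
print(f"total: {total / 2**20:.0f} MiB, free: {free / 2**20:.0f} MiB, "
      f"used (all processes): {used / 2**20:.0f} MiB")

# Estimate this script's own usage by taking a before/after delta.
baseline_free, _ = torch.cuda.mem_get_info()
x = torch.ones(1024, 1024, device="cuda")
free_after, _ = torch.cuda.mem_get_info()
print("approx. bytes taken by x:", baseline_free - free_after)
```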
