Unable to allocate CUDA memory when there is enough cached memory

Apologies for resurrecting this thread, but I am having the same issue regularly: I get the same RuntimeError as in the first message of this thread the first time I send any data to the GPU.

I have exclusive access to the GPU, so I could solve my issue if I could force the GPU memory to be cleared or freed. Is there a function in torch that I can use to do this? I’ve reviewed the information about memory management in the docs here, and I’m not convinced that torch.cuda.empty_cache() will resolve this.

An ideal solution for me would look something like:

...
torch.cuda.clear_memory_allocated()  # entirely clear all allocated memory
model = model.to(device)
...
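In the meantime, the closest real equivalents I’ve found are gc.collect() and torch.cuda.empty_cache(). Here is a minimal sketch of what I mean. My understanding (which may be wrong) is that empty_cache() only hands the caching allocator’s unused blocks back to the driver, so it cannot free memory that live tensors, or another process, still hold:

import gc
import torch

gc.collect()              # drop any unreachable Python objects that may still pin GPU tensors
torch.cuda.empty_cache()  # return the caching allocator's unused cached blocks to the driver

print(torch.cuda.memory_allocated())  # bytes currently occupied by live tensors
print(torch.cuda.memory_reserved())   # bytes the caching allocator still reserves
                                      # (memory_cached() on older PyTorch versions)

If both of those report 0 and nvidia-smi still shows the memory as used, then presumably it is held outside this process, which is why I suspect empty_cache() won’t help in my case.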

Any advice would be gratefully received.
