How can I free GPU memory for a specific tensor?

Hello. I’m currently running a deep learning program using PyTorch and wanted to free the GPU memory for a specific tensor.

I’ve considered `del` and `torch.cuda.empty_cache()`, but `del` doesn’t seem to work properly (I’m not even sure it frees memory at all), and `torch.cuda.empty_cache()` seems to release all unused cached memory, whereas I want to free the memory for just one specific tensor.

Is there any functionality in PyTorch that provides this?

Thanks in advance.


Why do you want to free the memory associated with a single specific Tensor?
`del foo` removes the link between the variable `foo` and the Tensor it refers to. If nothing else uses the Tensor, it is freed. But if anything else still uses it (another view, or the autograd graph keeping it for gradient computation), it won’t be deleted right away.
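This reference-counting behavior isn’t PyTorch-specific; it’s how CPython manages any object. Here’s a minimal CPU-only sketch (using the standard-library `weakref` module, with a plain class standing in for a tensor’s storage) showing that `del` only removes a name, and the object is actually freed once the last reference is gone:

```python
import weakref

class Payload:
    """Stand-in for a tensor's underlying storage."""
    pass

buf = Payload()
ref = weakref.ref(buf)   # lets us observe when the object is actually freed

alias = buf              # a second reference, like another view of a tensor
del buf                  # removes the name, but the object survives: alias still holds it
assert ref() is not None # still alive

del alias                # last reference gone; CPython frees the object immediately
assert ref() is None     # freed
```

The same logic applies to a CUDA tensor: `del` drops one reference, and the memory goes back to the allocator only once no view, variable, or autograd node references it anymore.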


Hello, thanks for the reply. There’s no particularly grand reason; I just wanted to make a project as memory-efficient as possible. I noticed that certain variables were used only once and never afterwards, and thought freeing the memory for those tensors would help. I tried `del foo`, but I also realized that this only “frees” the memory without returning it to the device.

I was initially confused because I wasn’t aware of the difference between freeing memory and returning it to the device. Perhaps my understanding is lacking, but “freeing without returning it to the device” still allows that memory to be reused, right?

Because CUDA’s built-in allocator (`cudaMalloc`/`cudaFree`) is a bit slow, PyTorch has a custom caching allocator that is much faster.
So when you free a Tensor, its memory is returned to this caching allocator, which can then reuse it for other Tensors without going back to the CUDA driver.
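The caching idea can be illustrated with a toy free-list allocator. This is a deliberately simplified model, not PyTorch’s actual `CUDACachingAllocator`: the point is just that `free` returns a block to a cache instead of the device, and a later allocation of the same size reuses it:

```python
class ToyCachingAllocator:
    """Toy model of a caching allocator: 'free' keeps blocks for reuse."""

    def __init__(self):
        self.free_blocks = {}     # size -> list of cached block ids
        self.device_allocs = 0    # how many times we hit the (slow) device API
        self._next_id = 0

    def malloc(self, size):
        cached = self.free_blocks.get(size)
        if cached:                # fast path: reuse a cached block
            return cached.pop()
        self.device_allocs += 1   # slow path: ask the device (like cudaMalloc)
        self._next_id += 1
        return self._next_id

    def free(self, size, block):
        # Returned to the cache, NOT to the device.
        self.free_blocks.setdefault(size, []).append(block)

alloc = ToyCachingAllocator()
a = alloc.malloc(1024)       # first allocation: hits the device
alloc.free(1024, a)          # cached, not returned to the device
b = alloc.malloc(1024)       # same size: reused from cache, no device call
print(alloc.device_allocs)   # 1
```

This is also why `nvidia-smi` keeps showing the memory as used after `del`: the allocator holds on to it. `torch.cuda.empty_cache()` is the call that hands those cached (unused) blocks back to the driver.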