Is there a way to release GPU memory held by CUDA tensors/variables?

I’m trying to run a PyTorch script in IPython so that I can monkey around with the outputs of my computation graph. The problem is that my GPU runs out of memory after I run the same script a few times, forcing me to exit and re-enter IPython. In nvidia-smi you can see the free GPU memory decrease by the same amount after every run until it eventually reaches 0 and an error is thrown. PyTorch seems to be allocating new GPU memory every time the script is executed instead of reusing the memory allocated in previous runs. Is there a way to forcibly release all GPU memory held by PyTorch between script executions so that I don’t have to constantly exit and re-enter IPython? Thanks!

I’ve tried %reset and unfortunately this doesn’t do the trick (not unexpectedly).

5 Likes

We haven’t exposed this memory-flushing functionality at the Python level yet.

I want it too. I hope this is being worked on.

1 Like

The method is available now: http://pytorch.org/docs/master/cuda.html#torch.cuda.empty_cache. Note that you need to delete all references to the tensors you want to free before calling it. This requirement exists for safety reasons. @jef
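
For example, here is a minimal sketch of the pattern (assuming a recent build that also has the torch.cuda.memory_allocated / torch.cuda.memory_reserved counters):

import torch

x = torch.randn(1024, 1024, device='cuda')  # allocate a tensor on the GPU
print(torch.cuda.memory_allocated())        # bytes held by live tensors
print(torch.cuda.memory_reserved())         # bytes held by the caching allocator

del x                                       # drop every reference first
torch.cuda.empty_cache()                    # then release the cached blocks

print(torch.cuda.memory_allocated())        # 0
print(torch.cuda.memory_reserved())         # 0 (the CUDA context itself still shows up in nvidia-smi)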

2 Likes

Well, at least in my PyTorch version it is not implemented:

import torch

a = torch.cuda.FloatTensor(10, 10)  # allocate a tensor on the GPU
del a                               # drop the only reference
torch.cuda.empty_cache()            # not available in this version

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'empty_cache'

I have just installed the latest PyTorch version from the website (Python 2.7, CUDA 8, pip installation).

I think this should be fixed quickly, as I am running out of memory on an 8 GB NVIDIA GPU. I have two networks sharing some parameters. The problem is that I run out of memory after some batches… this does not make much sense, however, as I do the same operations in every batch: forward for network 1, forward for network 2, and backward. Each batch from the data iterator is copied into a Variable so I can use nn.Module, but I don’t think this should allocate more memory. After all the batches, nvidia-smi does not show any additional allocation.
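
Roughly, my loop looks like this (a simplified, self-contained sketch with toy stand-ins for my actual networks and data; it omits the parameter sharing):

import torch
import torch.nn as nn
from torch.autograd import Variable

net1 = nn.Linear(10, 10).cuda()     # stand-in for network 1
net2 = nn.Linear(10, 1).cuda()      # stand-in for network 2
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(list(net1.parameters()) + list(net2.parameters()), lr=0.01)

for _ in range(100):                 # stand-in for my data iterator
    data = Variable(torch.randn(32, 10).cuda())    # copy each batch into a Variable
    target = Variable(torch.randn(32, 1).cuda())

    out = net2(net1(data))           # forward for network 1, then network 2
    loss = criterion(out, target)

    optimizer.zero_grad()
    loss.backward()                  # single backward pass
    optimizer.step()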

Thanks.

1 Like

The pip install is only version 0.2. If you look at the docs, the method is only available in >= 0.3. You will need to either build from source or wait for the 0.3 release.
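
As a quick check of what you have installed (a minimal snippet):

import torch

print(torch.__version__)                   # version string of the installed build
print(hasattr(torch.cuda, 'empty_cache'))  # True only on builds >= 0.3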

3 Likes

This function is very useful.

Hi,
I read your message. I would like to know whether the exposed memory-flushing functionality is also available to C++ LibTorch developers. I am using LibTorch C++ and I cannot find a way to release ALL the CUDA GPU memory used by a torch::nn::Module. I explained this with an example here: https://discuss.pytorch.org/t/release-all-cuda-gpu-memory-using-libtorch-c/108303

Thanks in advance.