Delete model from GPU/CPU

Hi,
I have a big issue with memory. I am developing a large GUI application for testing and optimizing neural networks. The main thread shows the GUI, while training runs in a worker thread. In my app I need to train many models with different parameters, one after another, so I create a new model for each attempt. When one model finishes training I want to delete it and train the next one, but I cannot delete the old model. I am trying something like this:

del model
torch.cuda.empty_cache()

but the GPU memory doesn't change.
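One likely explanation: `del` only removes a single name, and `torch.cuda.empty_cache()` can only release memory for tensors that no longer have any live Python reference. If the GUI thread, an optimizer, or a logged output still references the model, the object stays alive. A minimal torch-free sketch of this reference-counting behavior (the `Model` class here is just a hypothetical stand-in):

```python
import weakref

class Model:
    """Stand-in for a torch model; the point is CPython reference counting."""
    pass

model = Model()
alias = model               # e.g. a reference still held by the GUI/training thread
probe = weakref.ref(model)  # lets us observe when the object is truly freed

del model                   # removes one name only
assert probe() is not None  # still alive via `alias`

del alias                   # drop the last reference
assert probe() is None      # now CPython frees the object immediately
```

The same applies to a real model: every reference (optimizer state, stored losses, thread-local variables) has to be dropped before the memory can actually be reclaimed.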

Then I tried this:

model.cpu()
del model

When I move the model to the CPU, the GPU memory is freed, but CPU memory increases.
With each training attempt the memory keeps growing; only when I close my app and restart it is all the memory freed.

Is there a way to delete a model permanently from the GPU or CPU?
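Another thing worth checking: objects caught in a reference cycle (e.g. a callback or closure that points back at its owner, which training loops sometimes create) are not freed at `del` time; they wait for Python's cyclic garbage collector. That is one reason `gc.collect()` is often suggested before `torch.cuda.empty_cache()`. A torch-free sketch, with `TrainingRun` as a hypothetical stand-in:

```python
import gc
import weakref

class TrainingRun:
    """Hypothetical stand-in; a callback capturing `self` creates a cycle."""
    def __init__(self):
        self.on_epoch_end = lambda: self  # closure -> instance -> closure cycle

gc.disable()                 # make the demonstration deterministic
run = TrainingRun()
probe = weakref.ref(run)

del run
assert probe() is not None   # the cycle keeps the object alive past `del`

gc.collect()                 # the cyclic collector breaks the cycle
gc.enable()
assert probe() is None       # only now is the object freed
```

With a real model, running `gc.collect()` after deleting the last reference and before `empty_cache()` gives the allocator a chance to actually return the memory.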

I cannot reproduce the issue using a recent master build:

model = models.resnet18()
print(torch.cuda.memory_allocated())
> 0

model.cuda()
print(torch.cuda.memory_allocated())
> 46861312

del model
print(torch.cuda.memory_allocated())
> 0

I have this issue as well, but if you do this:

model = model.to('cuda')

it will free the GPU memory when you delete the model.


How do you clear models from a list? del doesn't work.

For reference, check my Colab notebook: Google Colab
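For the list case specifically: `del` on a loop variable or a copied name removes only that name, while the list itself still holds a reference to each model, so nothing is freed. Clearing the list (or deleting the list) drops those references. A torch-free sketch, with `Model` as a hypothetical stand-in:

```python
import weakref

class Model:
    """Stand-in for a torch model."""
    pass

models = [Model() for _ in range(3)]
probe = weakref.ref(models[0])

m = models[0]
del m                       # deletes only the local name; the list still holds entry 0
assert probe() is not None

models.clear()              # drop the list's references (del models would work too)
assert probe() is None      # the first model is now actually freed
```

With real models you would clear the list (and any optimizer or output references) and then call `torch.cuda.empty_cache()`.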

@Surya_Narayanan when I run your supplied code, the last check of allocated memory (print(torch.cuda.memory_allocated()/1024**2, '3')) gives me 0.0 3. Is this not what you got?