How can I release all the memory without killing the process?

Hi,

I have trained a model and am now running inference with it. After the first inference, the model takes a large amount of GPU memory, and even when I stop feeding it inputs, it keeps holding that memory. I tried torch.cuda.empty_cache(), but it does not shrink the memory usage back to what it was before the first inference. How can I unload the model while the process keeps running with no inputs, and reload it once new inputs are available?

You can move your model to the CPU with .cpu() and then call .cuda() on it to send it back to the GPU when you need it there.
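
For example, roughly like this (a sketch only; resnet101 is just a stand-in for your trained model):

import torch
import torchvision

# load the model on the GPU (placeholder model)
model = torchvision.models.resnet101().cuda().eval()

# ... run inference while inputs are coming in ...

# when idle: move the weights to host RAM and release the cached GPU blocks
model.cpu()
torch.cuda.empty_cache()

# when new inputs arrive: move the weights back to the GPU
model.cuda()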

The memory still cannot be collected completely. My test code is like this:


import torch
import torchvision

net = torchvision.models.resnet101()
net.cuda()
net.eval()

_ = input()  # pause: check memory before the first inference
inten = torch.randn((32, 3, 224, 224)).cuda()

for i in range(10):
    out = net(inten)
    print(out.shape)

_ = input()  # pause: check memory after inference
net.cpu()
torch.cuda.empty_cache()
_ = input()  # pause: check memory after moving the model off the GPU

It takes around 2 GB before the first input() prompt, then memory usage jumps to 11 GB before the second. After executing net.cpu() and torch.cuda.empty_cache(), around 6 GB is still in use. How can I get back to the 2 GB state?

Well, inten is still on the GPU, and so is out.
You might want to wrap things in a function to reduce the number of local variables that stay around.
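
For instance, something along these lines (a sketch; run_inference is just an illustrative name):

import torch
import torchvision

def run_inference(net):
    # inten and out are locals here, so once this function returns
    # nothing references them any more and their GPU memory can be freed
    inten = torch.randn((32, 3, 224, 224)).cuda()
    for i in range(10):
        out = net(inten)
        print(out.shape)

net = torchvision.models.resnet101().cuda().eval()
run_inference(net)
net.cpu()
torch.cuda.empty_cache()  # the cached blocks used by inten/out can now be released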

I tried that; the memory usage drops to around 4 GB, but that is still more than the roughly 1 GB at the very beginning. Is there more that can be optimized?

Could you share a full code sample that we can run locally and that reproduces the problem, please?

Hi,
This is my test code:


import torch
import torchvision


def fun():
    net = torchvision.models.resnet101()
    net.cuda()
    net.eval()

    _ = input()
    inten = torch.randn((32, 3, 224, 224)).cuda()
    for i in range(10):
        out = net(inten)
        print(out.shape)

    _ = input()
    net.cpu()
    inten = inten.cpu()
    torch.cuda.empty_cache()

fun()
_ = input()


The torch.cuda.empty_cache() call is not placed properly. At the point where you call it, net has been moved off the GPU, but out and inten have not been freed yet, since they are still locals of the function. You need to call it after the function call, because it is only when the function returns that these locals are cleaned up properly.

fun()
torch.cuda.empty_cache()
_ = input()
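
Put together, the corrected script looks roughly like this (same code as above, just with the cache emptied after fun() returns):

import torch
import torchvision

def fun():
    net = torchvision.models.resnet101()
    net.cuda()
    net.eval()

    _ = input()
    inten = torch.randn((32, 3, 224, 224)).cuda()
    for i in range(10):
        out = net(inten)
        print(out.shape)

    _ = input()
    net.cpu()
    inten = inten.cpu()

fun()
# net, inten and out are all locals of fun(), so they only go away once fun()
# has returned; emptying the cache afterwards can then release their blocks
torch.cuda.empty_cache()
_ = input()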