GPU Memory Usage Accumulates

Hi!

I am using FasterRCNN from torchvision to perform validation. Everything worked fine until I tried to store the model's predictions in an array. I only get about 10 predictions per image, and the video has 120 frames. I also transfer all the tensors to the CPU and store them there. However, at each iteration (i.e. after processing each frame), GPU memory usage grows by roughly 800 MB, so I eventually run out of memory.

Does anyone have an idea how I can store the prediction scores in a more memory-efficient way, or a fix for my current approach?

A sample piece of code is below:

predictions = {}
for i, frame in enumerate(video):
    predictions[i] = model(frame)
    for key, value in predictions[i].items():
        predictions[i][key] = value.to('cpu')

Best.


Wrap your code in a with torch.no_grad() block if you don't need to call .backward() in this part of the code, or detach the tensors from the computation graph so that they can be freed, via:

predictions[i][key].detach().to('cpu')

Even though you are pushing the prediction to the CPU, the attached computation graph, which is still on the GPU, will be kept alive.
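For reference, here is a minimal sketch of the inference loop with both suggestions applied, keeping the loop structure from your snippet and assuming model and video are defined as in your post:

import torch

predictions = {}
model.eval()  # switch the model to evaluation mode for inference
with torch.no_grad():  # no computation graph is built, so nothing extra stays on the GPU
    for i, frame in enumerate(video):
        output = model(frame)
        # detach each tensor and move it to the CPU before storing it
        predictions[i] = {key: value.detach().cpu() for key, value in output.items()}

With gradient tracking disabled, only the detached CPU copies are kept, so GPU memory should stay flat across frames.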
