RAFT (small and large) - Memory usage in eval mode with pretrained weights

(Beforehand: I use PyTorch, currently CPU only.)

Hi there. I am not completely new to PyTorch, but I am new to dense optical flow with PyTorch. The problem I am stumbling across is the following.

I have an image sequence and want to determine the optical flow for all consecutive frame pairs. To do that, I based my code on the example at the very bottom of:

https://pytorch.org/vision/0.12/auto_examples/plot_optical_flow.html#sphx-glr-auto-examples-plot-optical-flow-py

It is originally intended for generating GIFs, but it should work for my purposes as well. I am using that structure on my own images and am now running into memory issues.

My previous understanding was that training a network is the heavy lifting, and that eval mode 'just' classifies based on the acquired weights. In my case, after each iteration of the image-pair loop my 12 GB of RAM fill up further, ending with a SIGKILL after just 3-4 image pairs. (The same happens if I use batches of 2, just faster, or if I use raft_small, just slower.)

What I don't understand:
I use pretrained weights and eval mode. Shouldn't the RAM clear after each 'classification', i.e. after each call of the model() function?

The results and everything else look fine. Just the RAM gets fuller and fuller.

Hello Tim,

I had the same problem, and after doing some research I finally figured out why memory usage remains high after switching to eval mode.

The first thing to understand is that the majority of the memory is occupied by the computation graph that autograd builds for the tensors. Switching the model from train() to eval() merely changes the behavior of certain layers like dropout and batch norm; it does not disable gradient tracking. The model will still record a computation graph for every forward pass and keep the intermediate tensors needed for a potential backward pass alive.

To mitigate the high RAM usage, wrap the forward pass in torch.no_grad(), which disables gradient tracking. Memory consumption during the forward pass should then be significantly reduced, and nothing accumulates across loop iterations.
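A minimal sketch of the difference, using a small stand-in model (assumption: any nn.Module shows the same autograd behavior; in your case it would be torchvision's raft_large with pretrained weights):

```python
import torch
import torch.nn as nn

# Stand-in model (illustrative only; the real code would use
# torchvision.models.optical_flow.raft_large with pretrained weights).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 2, 3, padding=1),
)
model.eval()  # changes dropout/batch norm behavior, does NOT disable autograd

img = torch.rand(1, 3, 64, 64)

# Without no_grad(): the output still carries a computation graph,
# so intermediate tensors stay alive and RAM grows per iteration.
out_tracked = model(img)
print(out_tracked.requires_grad)    # True

# With no_grad(): no graph is recorded, so memory is released
# as soon as each forward pass finishes.
with torch.no_grad():
    out_untracked = model(img)
print(out_untracked.requires_grad)  # False
```

So in your loop, calling the model inside `with torch.no_grad():` should keep the memory flat from one image pair to the next.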

Hope it helps. If anything is wrong, please correct me. :slight_smile:
