RAFT (small and large) - memory usage in eval mode with pretrained weights

(Beforehand: I am using PyTorch on CPU only.)

Hi there. I am not completely new to PyTorch, but I am new to dense optical flow with PyTorch. The problem I am stumbling over is the following.

I have an image sequence and want to determine the optical flow for all consecutive frame pairs. I based my code on the example at the very bottom of:

https://pytorch.org/vision/0.12/auto_examples/plot_optical_flow.html#sphx-glr-auto-examples-plot-optical-flow-py

That code is originally intended for generating GIFs, but it should work for my purposes as well. I am using the same structure on my own images and am now running into memory issues.

My previous understanding was that training a network is the heavy lifting, and that eval mode 'just' does inference based on the pretrained weights. In my case, after each iteration of the image-pair loop, my 12 GB of RAM fill up further, ending in a SIGKILL after just 3-4 image pairs. (The same happens if I use batches of 2, just faster, or if I use raft_small, just slower.)

What I don't understand:
I use pretrained weights and eval mode. Shouldn't the RAM be freed after each 'classification'/call of model()?
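To illustrate the assumption I am questioning, here is a toy check with a generic module (not RAFT). It shows that .eval() by itself does not stop autograd from recording the forward pass, while torch.no_grad() does; I am not sure whether this is what is eating my RAM:

```python
import torch
import torch.nn as nn

# eval mode only changes the behaviour of layers like dropout/batchnorm
model = nn.Linear(4, 2).eval()
x = torch.randn(1, 4)

out = model(x)
print(out.requires_grad)  # True: autograd still builds a graph in eval mode

with torch.no_grad():
    out_ng = model(x)
print(out_ng.requires_grad)  # False: no graph is built here
```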

The results and everything else look fine; the RAM just keeps filling up.