Clear GPU memory between successive runs (inference)

Hello,

Let us assume the following runs:

python inference.py -testset A
python inference.py -testset B
python inference.py -testset C

In other words, this is the inference phase: the best model has already been saved and I am just evaluating it on different test sets. Is it advisable to clear the GPU memory between two runs? Or is every new run a “fresh” start in terms of GPU memory?

I'm not an expert on this, but in principle PyTorch manages GPU memory efficiently.
Moreover, since each run is a separate Python process, the GPU memory should be fully released once that process exits; the OS and driver reclaim it automatically. An exception is when Python launches daemon subprocesses from C++ libraries (e.g., OpenPose or DALI): those can get stuck and fail to release the memory. But that's the exception rather than the rule.
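If you want to verify this yourself within a single process, here is a minimal sketch (assuming a CUDA device is available; the tensor allocation just stands in for a model and its activations):

```python
import torch

def report(tag):
    # Report how much CUDA memory this process currently holds.
    allocated = torch.cuda.memory_allocated() / 1024**2
    reserved = torch.cuda.memory_reserved() / 1024**2
    print(f"{tag}: allocated={allocated:.1f} MiB, reserved={reserved:.1f} MiB")

report("before")

# Allocate a large tensor to simulate model weights/activations.
x = torch.randn(4096, 4096, device="cuda")
report("after allocation")

# Drop the Python reference, then release cached blocks back to the driver.
del x
torch.cuda.empty_cache()
report("after cleanup")
```

Note that `torch.cuda.empty_cache()` only matters *within* a process; between separate script invocations it is unnecessary, since everything is reclaimed when the process exits.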

Thanks, Juan. I've combined all the calls/runs into a single shell script.
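For reference, a sketch of such a script (the script name and flag mirror the runs above):

```bash
#!/usr/bin/env bash
set -e  # stop on the first failing run

# Each invocation is a separate process, so its GPU memory
# is released automatically when it exits.
for testset in A B C; do
    python inference.py -testset "$testset"
done
```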