Memory leak - Transformer model


I am loading one of the Transformers models using from_pretrained().
I only use this model for inference. After getting the results, I delete the model with: del model

I only use the CPU for this process. I noticed that after inference is done and the model is deleted, memory usage decreases, but about 1 GB of memory is still in use and is never freed.
Is there any cleanup step I am missing after using the model for inference?
I appreciate your help regarding this.

Thank you very much

Hi, I am wondering if anyone could help me with this issue.

Thank you very much in advance!

I don’t know offhand what the expected host memory usage for PyTorch would be, but did you measure it right after just importing PyTorch? I would expect the import to preload some data as well as third-party libraries (e.g. MKL), which takes some space (though it might be well below 1 GB).
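One way to get that baseline is to print the process RSS right after your imports and compare it with the value after import torch. A minimal sketch using only the standard library is below (psutil's Process().memory_info().rss is the more common tool; the unit handling here assumes Linux):

```python
import resource

def peak_rss_mb():
    # ru_maxrss is reported in kilobytes on Linux but in bytes on macOS;
    # this conversion assumes Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

# Print this once after plain imports, then again after `import torch`;
# the difference is what the framework and its bundled libraries
# (e.g. MKL) occupy before any model is loaded.
print(f"peak RSS so far: {peak_rss_mb():.1f} MB")
```

Whatever import torch adds on top of the plain-Python baseline is memory you should not expect del model to give back.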
If you are sure the memory is held by your model, check whether any outputs, optimizers, etc. are still stored, since they could reference the parameters and/or activations and keep them alive even after del model.