Clear all GPU memory used by PyTorch in the current Python process without exiting Python

I am running a modified version of third-party code that uses PyTorch on the GPU. I run the same model multiple times with varying configs, all from within Python: a wrapper script calls the model once per config. The first model runs fine, but the second or third run fails with a CUDA out-of-memory error. If I instead exit the process after each run and start the next model afresh, everything works. It is only when I chain the runs inside a single Python process that I hit out-of-memory errors.
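Roughly, the wrapper looks like this (`run_model` is a placeholder for the third-party entry point, which I can't reproduce here; the real call takes many more arguments):

```python
def run_all(configs, run_model):
    """Call the third-party model once per config inside one process."""
    results = []
    for cfg in configs:
        # Works for the first config; the second or third call
        # fails with a CUDA out-of-memory error.
        results.append(run_model(cfg))
    return results
```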

I suspect there are memory leaks in the third-party code. Searching around, I found two suggestions: call torch.cuda.empty_cache(), or delete tensors explicitly with del tensor_name. However, empty_cache() does not free all of the memory, and the third-party code creates far too many tensors for me to track down and delete individually. Is there any way to release all of the GPU memory held by the current Python program, from within the Python code itself?
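For reference, this is the kind of cleanup I have tried between runs (`run_model` again stands in for the third-party entry point; the gc.collect() call and the lazy torch import are my additions, the latter just so the sketch is self-contained):

```python
import gc

def run_all_with_cleanup(configs, run_model):
    """Run each config, then attempt to release GPU memory between runs."""
    results = []
    for cfg in configs:
        results.append(run_model(cfg))
        gc.collect()  # break reference cycles that may keep tensors alive
        try:
            import torch
            if torch.cuda.is_available():
                # Only returns *cached, unreferenced* blocks to the driver;
                # tensors still referenced somewhere stay allocated.
                torch.cuda.empty_cache()
        except ImportError:
            pass  # torch not installed in this sketch's environment
    return results
```

Even with this in place, nvidia-smi shows most of the memory still held after the first run, which is why I suspect lingering references inside the third-party code rather than the cache.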