How to clear GPU memory after using model?

I’m trying to free up GPU memory after finishing using the model.

  • I checked nvidia-smi before creating and training the model: 402MiB / 7973MiB
  • After creating and training the model, I checked the GPU memory again with nvidia-smi: 7801MiB / 7973MiB
  • Now I tried to free up the GPU memory with:

```python
import gc
import torch

del model
torch.cuda.empty_cache()
gc.collect()
```

and then checked the GPU memory again: 2361MiB / 7973MiB

  • As you can see, not all of the GPU memory was released.
  • I can only fully release the GPU memory from the terminal (sudo fuser -v /dev/nvidia* to find the PID, then kill it)

Is there a way to free up the GPU memory after I am done using the model?

Your code should generally free all allocated memory, so my guess is that you are still holding references to other tensors stored on the GPU (e.g. the optimizer state or outputs from the forward pass) and need to delete those as well.
Once this is done, note that the CUDA context will still use memory to hold all the kernels, which can take up to ~1GB depending on the device, CUDA version, PyTorch version, etc. This context memory is only released when the process exits.
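
As a sketch of that cleanup, assuming a typical training setup (the `model`, `optimizer`, and `out` names here are hypothetical stand-ins for whatever objects your script keeps alive), dropping every GPU-holding reference before emptying the cache might look like:

```python
import gc
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
out = model(torch.randn(8, 1024, device=device))

# Every live reference keeps its underlying CUDA tensors allocated,
# so the forward output and the optimizer (with its internal state)
# must be dropped too -- deleting only `model` is not enough.
del out, optimizer, model
gc.collect()                  # break any reference cycles first
if device == "cuda":
    torch.cuda.empty_cache()  # return cached blocks to the driver
    # memory_allocated() should now report 0 bytes of live tensors;
    # nvidia-smi will still show the CUDA context until the process exits.
    print(torch.cuda.memory_allocated())
```

If this still leaves memory allocated, `torch.cuda.memory_allocated()` vs. `torch.cuda.memory_reserved()` helps distinguish tensors you are still referencing from blocks PyTorch is merely caching.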