I am trying to free the GPU cache without restarting the Jupyter kernel in the following way:
del model
torch.cuda.empty_cache()
However, the memory is not freed. Could you tell me what I am doing wrong?
The memory will only be freed once no references to it remain. Stray references can come from your network inputs, your network outputs, your accumulated loss, and so on.
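As a sketch of the point above, using plain Python objects rather than real CUDA tensors (`FakeTensor` here is a hypothetical stand-in): a `del` on one name does not free the object while any other reference, such as a saved output, still points at it. A `weakref` lets us observe when the object is actually released.

```python
import gc
import weakref

class FakeTensor:
    """Hypothetical stand-in for a CUDA tensor; illustrates reference lifetime only."""
    pass

model = FakeTensor()
output = model            # a second reference, e.g. a saved network output
probe = weakref.ref(model)

del model                 # one reference gone, object still alive via `output`
assert probe() is not None

del output                # last reference gone
gc.collect()              # not strictly needed in CPython, but makes the point explicit
assert probe() is None    # only now is the object actually freed
```

Only at the point where `probe()` returns `None` would the corresponding GPU memory become cached-but-unused, so that a subsequent `torch.cuda.empty_cache()` could return it to the driver.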
Hi, does torch have a tensor reference counter?
Yes, Tensor objects are reference counted. It is done on the C++ side, though, via a shared_ptr-like system.