Suppose I create a tensor and put it on the GPU. Later I no longer need it and want to free the GPU memory it occupies. How can I do that?
```python
import torch

a = torch.randn(3, 4).cuda()
# nvidia-smi shows that some memory has been allocated.

# do something

# a no longer exists, and nvidia-smi should show that the memory has been freed.
```
I have tried:
- del a
- del a; torch.cuda.empty_cache()
But neither of them works.
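For context, here is a minimal sketch of what I would expect to happen, measured with `torch.cuda.memory_allocated()` instead of `nvidia-smi`. My understanding (which may be part of the confusion) is that `del` releases the tensor back to PyTorch's caching allocator, and `torch.cuda.empty_cache()` returns cached blocks to the driver; however, `nvidia-smi` will still report several hundred MB because the CUDA context itself stays resident for the lifetime of the process, so the number it shows never drops back to zero:

```python
import torch

if torch.cuda.is_available():
    a = torch.randn(3, 4, device="cuda")
    # Bytes currently held by live tensors: > 0 while `a` exists.
    print(torch.cuda.memory_allocated())

    del a                     # drop the last reference; the caching allocator marks the block free
    torch.cuda.empty_cache()  # return unused cached blocks to the CUDA driver

    # No live tensors remain, so this should print 0, even though
    # nvidia-smi still shows the CUDA context's own memory.
    print(torch.cuda.memory_allocated())
else:
    print("CUDA not available; nothing to demonstrate")
```

Is this picture correct, or is there another reference to the tensor keeping the memory alive?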