a = torch.randn(1000000, 1000, device=0)
# current gpu usage = 4383M
b = a.cpu()
# current gpu usage is still = 4383M
I’d like to free the GPU memory held by `a` after converting the tensor to CPU.
What I’ve tried:
But it still occupies 4383 MB of GPU memory.
How can I do this?
I’m not an expert, but you need to consider that when you call `.cpu()` you are making a copy in RAM; it doesn’t remove the GPU version.
You can delete the GPU tensor (`del a`) to free that memory. The freed blocks stay in PyTorch’s caching allocator rather than being returned to the driver immediately, so tools like `nvidia-smi` still report them as used: the allocator knows it can reuse those addresses and only overwrites them when new allocations need the space, since releasing and re-requesting memory from the driver takes time. If you want the memory reported as free, call `torch.cuda.empty_cache()` after deleting the tensor.
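A minimal sketch of that sequence (with a CPU fallback so it also runs on machines without CUDA; the tensor size is smaller here than in the question just to keep it quick):

```python
import gc
import torch

# Use the GPU when available, otherwise fall back to CPU
# (assumption: on a real GPU box the tensor lives in CUDA memory).
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(1000, 1000, device=device)
b = a.cpu()      # copy to host RAM; the device copy still exists

del a            # drop the last reference to the device tensor
gc.collect()     # make sure the reference is actually collected

if torch.cuda.is_available():
    # The caching allocator still holds the freed block for reuse;
    # empty_cache() returns it to the driver so nvidia-smi shows it freed.
    torch.cuda.empty_cache()
    # Bytes still held by live tensors (b is on the CPU, so a's block is gone)
    print(torch.cuda.memory_allocated())

print(b.device)
```

Note that `torch.cuda.memory_allocated()` only counts live tensors, so it drops as soon as `del a` runs; `empty_cache()` is what makes the difference in `nvidia-smi`.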
Anyway this is just what I understood from other people’s explanations.