How to swap data between CPU and GPU

How can I swap part of a model's data, including its parameter tensors and intermediate-result tensors, to host memory (CPU) while freeing the GPU memory they occupy in PyTorch?

Why is the .cpu() method not sufficient?

From the docs, .cpu() returns a copy of the GPU data. Does it also free the GPU memory that the original tensor occupies?

If you want to release the GPU memory, delete all references to the tensor and then call torch.cuda.empty_cache(). Note that empty_cache() can only return cached blocks that no live tensor is still using.
I found a topic about how to release a tensor here, but I'm not sure it will work…
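A minimal sketch of the pattern described above: copy the tensor to host memory, drop the GPU reference, then call empty_cache(). The tensor name and the offload_to_cpu helper are made up for illustration; the snippet falls back to CPU when no GPU is present.

```python
import torch

def offload_to_cpu(t):
    """Return a host-memory copy of a tensor (hypothetical helper)."""
    return t.detach().cpu()

# use CUDA only if it is available, so the sketch also runs on CPU-only machines
device = "cuda" if torch.cuda.is_available() else "cpu"
t = torch.randn(1024, 1024, device=device)

host_t = offload_to_cpu(t)  # copy the data to host memory
del t                       # drop the last reference to the device tensor
if torch.cuda.is_available():
    torch.cuda.empty_cache()  # return cached, unreferenced blocks to the driver

print(host_t.device.type)
```

To move the data back later, `host_t.to(device)` allocates a fresh GPU copy.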

Thanks, that seems to be a solution.

Note that this approach will indeed free the GPU memory, so that other applications can use it.
However, if you want to use that memory again in PyTorch, it would have to be reallocated, which might slow down your code. To avoid this, PyTorch uses a custom caching allocator, so that cached GPU memory can be reused without new allocations from the driver.
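The caching behavior can be observed with PyTorch's memory-introspection functions: after deleting a tensor, torch.cuda.memory_allocated() drops, while torch.cuda.memory_reserved() typically stays unchanged because the block is kept in the cache. A hedged sketch (only meaningful on a machine with a GPU):

```python
import torch

if torch.cuda.is_available():
    x = torch.randn(256, 256, device="cuda")
    print("allocated:", torch.cuda.memory_allocated())
    print("reserved: ", torch.cuda.memory_reserved())

    del x
    # the block is freed back to PyTorch's cache, not to the driver:
    # allocated memory drops, reserved (cached) memory usually does not
    print("allocated after del:", torch.cuda.memory_allocated())
    print("reserved after del: ", torch.cuda.memory_reserved())

    torch.cuda.empty_cache()
    # now the cached block is handed back to the driver as well
    print("reserved after empty_cache:", torch.cuda.memory_reserved())
```

This is why a later allocation of a same-sized tensor is cheap if you skip empty_cache(): it is served from the cache instead of going through the CUDA driver.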
