CPU memory not cleared after .to(cuda_device)

How do I free the CPU memory?

Test code:

import torch
from torchvision import models
import gc


@profile
def test():
    net = models.alexnet(pretrained=True)  # model is built in CPU memory
    gpu = torch.device('cuda:0')
    net.to(gpu)                            # copy the parameters to the GPU
    net.cpu()                              # move them back to the CPU
    del net
    aaa = [1, 2, 3, 4]                     # filler statements so memory_profiler
    torch.cuda.empty_cache()               # reports a line after each call
    bbb = [1, 123, 4, 151]
    gc.collect()
    aa = 1


if __name__ == "__main__":
    print("main")
    test()

Run command: python -m memory_profiler profile_try.py

Hi,

The only line where you could expect to see a change here is between net.to(gpu) and net.cpu(), right?
The allocator on your system might not release memory back to the OS right away, as a speed optimization. And it looks like that is what happens here: you can see that moving your model back to the CPU uses less memory than creating it did at the beginning.
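If you want to double-check this outside of memory_profiler, a minimal sketch along these lines prints the process RSS around each step (this assumes psutil is installed; the rss_mb helper and the printed labels are mine, not part of your original script):

import gc
import os

import psutil
import torch
from torchvision import models


def rss_mb():
    # Resident set size of the current process, in MB
    return psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2


print(f"start:          {rss_mb():.1f} MB")
net = models.alexnet(pretrained=True)
print(f"after load:     {rss_mb():.1f} MB")

net.to(torch.device('cuda:0'))
print(f"after .to(gpu): {rss_mb():.1f} MB")  # the CPU copy may still be cached by the allocator

net.cpu()
del net
gc.collect()
torch.cuda.empty_cache()
print(f"after cleanup:  {rss_mb():.1f} MB")  # RSS can stay high even though the memory is free for reuse

If the RSS stays high after the cleanup, that is the C allocator keeping freed pages cached for later allocations, not a leak: allocating another model afterwards should reuse that memory rather than grow the process further.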