Memory leak on CPU. How to release memory explicitly?

Hi all.
Is there a way, like torch.cuda.empty_cache(), to explicitly release CPU memory? I am running into a memory leak on the CPU.
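
The closest thing I have found so far is asking the C allocator to return freed pages directly. This is only a sketch of a possible workaround, not a PyTorch API: it assumes Linux with glibc, and I am not sure it is the right approach:

import ctypes

# Assumption: Linux with glibc. malloc_trim is a glibc call, not part of PyTorch.
libc = ctypes.CDLL("libc.so.6")
libc.malloc_trim(0)  # ask glibc to return freed heap pages to the OS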
My code looks like this:

import torch
from torchvision.models.resnet import resnet50

def func(model):
    example_input = torch.randn(1, 3, 224, 224)
    traced = torch.jit.trace(model, example_input)  # tracing needs example inputs
    out = model(example_input)

for i in range(10):
    model = resnet50()
    func(model)
    print("finished iteration", i)  # memory has grown by ~2 GB at this point

On every iteration, by the time execution reaches the print statement the process memory has grown by about 2 GB. On the next iteration, that memory is released when torch.jit.trace runs again.
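
For anyone reproducing this: one way to watch the process memory between iterations is a small psutil helper (psutil is an extra dependency, and rss_mb is just a name I made up):

import os
import psutil

def rss_mb():
    # Resident set size of the current process, in MiB.
    return psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2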

A simpler code snippet is:

import gc

from memory_profiler import profile
from torchvision.models.resnet import resnet50

def sub_fun1():
    model = resnet50(num_classes=10)
    del model

def sub_fun2():
    model = resnet50(num_classes=10)
    model = model.cuda()  # move the model to the GPU
    del model

@profile(precision=4)
def main():
    sub_fun1()
    sub_fun2()
    gc.collect()

The memory_profiler output is:

Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
    61 215.7070 MiB 215.7070 MiB           1   @profile(precision=4)
    62                                         def main():
    63 237.2070 MiB  21.5000 MiB           1       sub_fun1()
    64 2988.8672 MiB 2751.6602 MiB           1       sub_fun2()
    65 2988.8672 MiB   0.0000 MiB           1       gc.collect()

So the memory is not released after sub_fun2 is called. Why does this happen?
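
One thing I still want to rule out is whether the big jump comes from the model weights at all, or just from the CUDA context that gets created on the first .cuda() call. A minimal check I plan to run (the function name is mine and the tensor size is arbitrary):

import torch
from memory_profiler import profile

@profile(precision=4)
def cuda_context_only():
    # Moving one tiny tensor to the GPU forces CUDA context creation
    # without any model weights involved.
    t = torch.zeros(1).cuda()
    del t

cuda_context_only()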