Memory leak when a model is transferred from GPU to CPU and saved

def save_model(self):
    net = self.my_net.module
    dummy_input = torch.randn(1, 3, 112, 112).cpu()
    net.cpu()  # switch to CPU to save the model, to avoid some issues on PyTorch 1.1.0
    script_module = torch.jit.trace(net, dummy_input)
    net.cuda()  # switch back to CUDA to continue training on the GPU

The memory usage keeps increasing after each call to save_model, in other words, memory is being leaked.

Could anyone help me understand this issue? Thank you.

Memory on the cpu or gpu side?
How do you measure this?
Could you provide a small script to reproduce this?
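To show what such a measurement could look like: one way to check whether a function leaks on the CPU side is to sample traced memory before and after each call with the standard-library `tracemalloc` module. The sketch below uses a hypothetical `leaky_save` stand-in (not the original `save_model`, since the torch model isn't reproduced here) that deliberately retains about 1 MB per call; steadily growing samples are the signature of a leak.

```python
import tracemalloc

_cache = []

def leaky_save():
    # Stand-in for save_model: each call retains ~1 MB, mimicking a leak.
    _cache.append(bytearray(1024 * 1024))

tracemalloc.start()
baseline, _ = tracemalloc.get_traced_memory()

samples = []
for _ in range(5):
    leaky_save()
    current, _ = tracemalloc.get_traced_memory()
    samples.append(current - baseline)

tracemalloc.stop()

# A monotonically increasing sequence indicates memory retained across calls.
print([s // (1024 * 1024) for s in samples])  # grows by ~1 MB per call
```

Swapping `leaky_save` for the real `save_model` (and, for GPU-side memory, sampling `torch.cuda.memory_allocated()` instead) would give the kind of small reproduction script asked for above.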