import torch

def save_model(self):
    # unwrap the DataParallel / DistributedDataParallel wrapper
    net = self.my_net.module
    dummy_input = torch.randn(1, 3, 112, 112).cpu()
    net.eval()
    net.cpu()  # trace on CPU to avoid some issues on PyTorch 1.1.0
    script_module = torch.jit.trace(net, dummy_input)
    script_module.save('my_model.pt')
    net.cuda()  # move back to GPU to continue training
    net.train()
Memory usage keeps increasing after each call to save_model; in other words, memory is being leaked.
Could anyone help me understand this issue? Thank you.
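For reference, this is roughly how I observe the growth. It is only a minimal sketch: it assumes a trainer object exposing the save_model method above, that psutil is installed, and the loop count is arbitrary; trainer and the call count are placeholder names for illustration, not part of my real training code.

import os
import psutil
import torch

process = psutil.Process(os.getpid())
for i in range(10):  # arbitrary number of repeated saves, just to show the trend
    trainer.save_model()
    rss_mb = process.memory_info().rss / 1024 ** 2      # host (CPU) memory of this process
    gpu_mb = torch.cuda.memory_allocated() / 1024 ** 2  # CUDA memory currently held by tensors
    print(f"call {i}: RSS {rss_mb:.1f} MB, CUDA allocated {gpu_mb:.1f} MB")

Both numbers go up with every call, and they never come back down between calls.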