My model trains without memory issues, but when deployed it has a memory leak that I cannot track down. I'm not sure whether it's related to PyTorch, but to help me narrow down the options, I'd like to know: in a loop such as the following, is there potential for a memory leak in PyTorch? If so, can you suggest some possibilities?
```python
import torch

while True:
    model = SomePytorchModel()  # placeholder for my actual model class
    model.eval()
    for i in range(100):
        with torch.no_grad():
            result = model()  # my forward pass takes no input here
```
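For context, here is a minimal, self-contained sketch of how I could measure memory growth per outer iteration. `SomePytorchModel` is just a small stand-in for my real model, and I'm assuming `psutil` is available; neither is part of my actual deployment code:

```python
import psutil
import torch
import torch.nn as nn


class SomePytorchModel(nn.Module):
    """Small stand-in for my real model, just for measurement."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 128)

    def forward(self, x):
        return self.fc(x)


process = psutil.Process()  # current process

while True:
    model = SomePytorchModel()
    model.eval()
    for i in range(100):
        with torch.no_grad():
            result = model(torch.randn(1, 128))
    # Resident set size after each outer iteration; steady growth
    # here would suggest a leak somewhere inside this loop.
    print(f"RSS: {process.memory_info().rss / 1e6:.1f} MB")
```

If the RSS printed here climbs steadily across outer iterations, that would point at this loop; if it stays flat, the leak is presumably elsewhere in my deployment stack.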