Memory leak during inference?

My model trains without memory issues, but when deployed it leaks memory, and I cannot track the leak down. I’m not sure whether it’s related to PyTorch, but to help me narrow down the options, I would like to know: in a loop such as the following, is there potential for a memory leak in PyTorch? If so, can you suggest some possibilities?

import torch

while True:
    model = SomePytorchModel()   # placeholder for the deployed model
    model.eval()                 # switch to inference behavior
    for i in range(100):
        with torch.no_grad():    # disable autograd bookkeeping
            result = model()     # forward pass (inputs omitted in the snippet)

Your code is a bit odd in that it recreates the model inside the loop, but it should not leak memory: once the name model is rebound to a new object, the previous model has no remaining references, so Python’s garbage collector will reclaim it.
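
If you want to verify this empirically, you could watch the process’s resident memory across iterations. Here is a minimal sketch under stated assumptions: TinyModel is a hypothetical stand-in for your model, psutil is just one of several ways to read resident memory, and the loop counts are arbitrary.

import gc
import psutil
import torch
import torch.nn as nn

class TinyModel(nn.Module):  # hypothetical stand-in for SomePytorchModel
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 128)

    def forward(self, x):
        return self.fc(x)

process = psutil.Process()
for step in range(10):
    model = TinyModel()  # rebinding drops the last reference to the old model
    model.eval()
    with torch.no_grad():
        result = model(torch.randn(1, 128))
    gc.collect()  # force a collection so the reading is stable
    print(f"step {step}: RSS = {process.memory_info().rss / 1e6:.1f} MB")

If the resident set size plateaus after the first few iterations, this loop is not the source of the leak; a steady climb would instead point at something holding references (e.g., results accumulated in a list, or tensors kept alive by a logger) rather than at PyTorch itself.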