Backward memory leak

I am currently calling torch.Tensor.backward(retain_graph=True, create_graph=True) and I am running into a memory leak.
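
For context, here is a minimal sketch of the call pattern (the model, data, and loss below are placeholders, not my actual setup):

import torch

model = torch.nn.Linear(10, 1)
x = torch.randn(32, 10)

for step in range(1000):
    loss = model(x).pow(2).mean()
    # create_graph=True makes the computed gradients carry their own graph,
    # which in turn references the parameters again
    loss.backward(retain_graph=True, create_graph=True)

Memory usage keeps growing over the iterations of a loop like this.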

According to the docs, this is expected, and to avoid it the .grad fields of the parameters should be reset to None after use. I am doing that with the following snippet:

# set the gradients to None to break the reference cycle mentioned in the docs
for param in model.parameters():
    param.grad = None

However, I am still seeing the memory leak. Is there something else that I need to reset?

Note that I cannot switch to the alternative torch.autograd.grad() function, because I have registered some backward hooks that would not be called in that case.
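
For illustration only, this is the flavor of hook I have in mind (not my exact code; the sketch below assumes PyTorch >= 2.1 for register_post_accumulate_grad_hook, and the hook body is a placeholder):

import torch

model = torch.nn.Linear(10, 1)  # placeholder model

def log_grad(param):
    # placeholder body; the real hooks do more than printing
    print(param.grad.norm())

for param in model.parameters():
    # runs only after gradients have been accumulated into .grad,
    # i.e. only during a .backward() call
    param.register_post_accumulate_grad_hook(log_grad)

loss = model(torch.randn(4, 10)).sum()
loss.backward()  # the hooks fire here
# torch.autograd.grad() would not populate .grad, so these hooks would not run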