loss.backward() takes longer and longer

At the beginning of training, `loss.backward()` took only about 40 s (I have a large model of an optical system), but as training goes on, the running time of `loss.backward()` keeps increasing. In my understanding, different epochs should take roughly the same time, so something must be wrong, but I can't find it. Where could the problem be?

Check whether you are extending the computation graph across iterations — if so, each backward call computes gradients through all previous iterations as well, so it gets slower over time. Also, how did you narrow down that the slowdown comes from the backward call? Note that CUDA operations execute asynchronously, so naive host-side timings can attribute time to the wrong call unless you synchronize first.
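A minimal sketch of the usual culprit, assuming a toy linear model (the model and variable names are illustrative): accumulating the loss tensor itself keeps every iteration's graph alive, while accumulating a detached copy does not.

```python
import torch

# Hypothetical minimal loop illustrating the common pitfall: summing the
# loss tensor directly retains each iteration's computation graph, so the
# graph (and any backward through it) grows over time.
model = torch.nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)

running_loss_bad = 0.0
running_loss_good = 0.0
for _ in range(3):
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    model.zero_grad()

    running_loss_bad += loss            # keeps the graph: grows each step
    running_loss_good += loss.detach()  # or loss.item(): graph-free scalar

print(running_loss_bad.grad_fn)   # an autograd node: the graph is retained
print(running_loss_good.grad_fn)  # None: no graph attached
```

If `running_loss_bad` is later used in anything that calls `backward()`, autograd walks through every stored iteration, which matches the "longer and longer" symptom; accumulating with `.detach()` or `.item()` avoids it.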