Training time per epoch keeps increasing

Hello, I am using a GPU (CUDA 11.1) to train a large AI model in PyTorch, but during training the time per epoch constantly increases (starting at about 20 s and going up). I don't know why — can you help, please?
Here is my training loop:

Could you try to narrow down a minimal code snippet and post it with random input tensors so that we could try to reproduce this issue?
Also, if you are seeing increased memory usage alongside the slowdown, you might be (accidentally) storing the computation graph in each iteration.
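A common way this happens (just a sketch with made-up tensors, not your code) is appending the loss tensor itself to a list for logging, which keeps each iteration's computation graph alive:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

losses = []
for step in range(3):
    x = torch.randn(8, 10)  # random inputs just for illustration
    y = torch.randn(8, 1)

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    # Bug: losses.append(loss) would store the tensor together with its
    # computation graph, so memory grows (and iterations slow down).
    # Fix: store a plain Python float instead.
    losses.append(loss.item())

print(losses)
```

Calling `loss.item()` (or `loss.detach()`) breaks the reference to the graph, so it can be freed after each iteration.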
You can also post code snippets by wrapping them into three backticks ```, which makes debugging easier.