Re-train a pre-trained model

I have a pre-trained model for speech enhancement. I loaded it and resumed training from this checkpoint (i.e. set the model to `model.train()`) on the same dataset. However, while resuming training I didn't update the model, i.e. no `loss.backward()` or optimizer step, because I think it has already seen the data and doesn't need to update its parameters. The problem is that the loss starts to diverge. Is that normal?

I'm not sure what "loss starts to diverge" means here, but note that `model.train()` will e.g. update the running stats of batchnorm layers and will also enable dropout, which might change the model's outputs during deployment.
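A minimal sketch of this effect (assuming a PyTorch model with batchnorm and dropout layers, as in the description above): even without any backward pass, just calling the model in `train()` mode changes its behavior, since dropout is active and the batchnorm running stats are updated on every forward pass. The toy model here is hypothetical, not the original speech-enhancement network.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy model standing in for the speech-enhancement network.
model = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8), nn.Dropout(p=0.5))
x = torch.randn(4, 8)

# eval(): dropout disabled, batchnorm uses its stored running stats.
model.eval()
with torch.no_grad():
    out_eval = model(x)

# train(): dropout active, batchnorm normalizes with the batch stats
# and updates its running stats on every forward, even under no_grad().
model.train()
with torch.no_grad():
    out_train = model(x)
    running_before = model[1].running_mean.clone()
    model(x)  # a second forward pass updates the running stats again
    running_after = model[1].running_mean.clone()

print(torch.allclose(out_eval, out_train))           # outputs differ
print(torch.allclose(running_before, running_after)) # running stats drifted
```

So a loss computed while the model is in `train()` mode is not a fixed number: it fluctuates with dropout masks and with the drifting batchnorm statistics, even though no parameter is being optimized.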

I mean that the loss value increases, not decreases.