Model resume results in higher train loss but same inference loss

Applying the model resume as explained here gives the same inference error. However, when I resume training, the training loss is much higher than it was at the time the model was saved. It is worth mentioning that the learning rate was decreased by 10% each time the validation loss improved, which means no LR scheduler was defined. A simplified sketch of my checkpoint/resume logic is below.
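The snippet is only a stand-in for my actual code (the placeholder model and file name are just for illustration), but it shows the save/resume pattern I follow:

```python
import torch
import torch.nn as nn

# Placeholder model/optimizer so the sketch is self-contained; my real ones differ.
model = nn.Linear(10, 1)
optimizer_pd = torch.optim.Adam(model.parameters(), lr=1e-3)

# Saving: keep both the model and the optimizer state so training can be resumed.
torch.save({
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer_pd.state_dict(),
}, 'checkpoint.pth')

# Resuming in a fresh run: rebuild the model/optimizer, then load the saved state.
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer_pd.load_state_dict(checkpoint['optimizer_state_dict'])
model.train()  # switch back to training mode before continuing
```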
Thank you.

Is the learning rate schedule restored to the same point when training is resumed?

Thank you for your reply.
I have not defined an LR schedule. I just multiply the learning rate by 0.9 every time the validation error decreases.

In that case you might want to store the learning rate (e.g., in something like the state dict) so that the same learning rate is used when training is resumed. As a quick hack to test whether restoring the learning rate that was in use when training stopped helps, you could first try something like adding it to the checkpoint file name and then setting that rate manually when training is resumed.
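Something along these lines, as a rough sketch rather than your exact code (it assumes `model` and `optimizer` are the objects already in your training script):

```python
import torch

# Read the rate currently in use and store it with the checkpoint
# (and in the file name, so it is visible at a glance).
current_lr = optimizer.param_groups[0]['lr']
torch.save({
    'model_state_dict': model.state_dict(),
    'lr': current_lr,
}, f'checkpoint_lr_{current_lr:.2e}.pth')

# When training is resumed, write the stored rate back into every param group.
checkpoint = torch.load(f'checkpoint_lr_{current_lr:.2e}.pth')
model.load_state_dict(checkpoint['model_state_dict'])
for param_group in optimizer.param_groups:
    param_group['lr'] = checkpoint['lr']
```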

Exactly, that is what I did: I assigned the same LR value to optimizer_pd.param_groups[0]['lr'] and, for safety, to optimizer_pd.defaults['lr'] (roughly as in the snippet below). Still, I get exactly the same inference loss value but a training loss that is about 10 times worse.
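Concretely, after loading the checkpoint I do something like this (`saved_lr` is a placeholder for the value I recover from the checkpoint name):

```python
# Restore the learning rate that was in use when the checkpoint was saved.
saved_lr = 1e-4  # placeholder: the value read back from the checkpoint name
optimizer_pd.param_groups[0]['lr'] = saved_lr
optimizer_pd.defaults['lr'] = saved_lr  # for safety; step() reads from param_groups
print(optimizer_pd.param_groups[0]['lr'])  # sanity check before resuming training
```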
Kind regards