Seed effect on model training

Hello, I am training a binary classification model and I want my results to be reproducible, so I am calling this function before every training session starts:

import random
import numpy as np
import torch

def seed_everything(seed_value, use_cuda):
    random.seed(seed_value)        # Python RNG
    np.random.seed(seed_value)     # NumPy RNG
    torch.manual_seed(seed_value)  # PyTorch CPU RNG (also drives DataLoader shuffling)
    if use_cuda:
        torch.cuda.manual_seed_all(seed_value)  # RNGs of all GPUs
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
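
For the shuffled batches to be reproducible as well, I set the seed before the DataLoader is created and iterated. A minimal sketch of how I call it (the toy dataset, batch size, and seed value here are just placeholders):

import torch
from torch.utils.data import DataLoader, TensorDataset

seed_everything(seed_value=42, use_cuda=torch.cuda.is_available())

# toy dataset standing in for the real training data
dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

first_batch = next(iter(loader))  # identical on every run with this seed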

What I am asking is: should I call seed_everything even if I load the model weights from a previous session via model.load_state_dict()?

I am asking because, for example, if I train for 30 epochs, save the model weights at epoch 20, and then start again from epoch 20, the results for epochs 20 to 30 are not the same as in the previous training session.
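
To make the setup concrete, the save/resume flow looks roughly like this (simplified; model stands for my network, and the file name and seed value are just placeholders):

# first run: save the weights at epoch 20
torch.save(model.state_dict(), "checkpoint_epoch20.pt")

# new session: reseed and restore the weights
seed_everything(seed_value=42, use_cuda=True)
model.load_state_dict(torch.load("checkpoint_epoch20.pt"))
# training epochs 20 to 30 from here does not reproduce the first run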

This might be caused by the shuffling of the data, if you haven't restored the state of the data loading (i.e. the random number generators) from epoch 20 before continuing the training. Seeding at the start of the resumed run resets the RNG to its epoch-0 state, not to the state it had at epoch 20, so the DataLoader yields a different sample order than in the original run.
If you are shuffling the data, you could try to add 20 “empty” epochs that just iterate the DataLoader, so the RNG advances to the same point, and then check whether the final performance matches.
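
In code, the fast-forward could look like this (a rough sketch; loader, the epoch numbers, and the training step are placeholders for your setup):

seed_everything(seed_value=42, use_cuda=True)

resume_epoch = 20   # epoch at which the checkpoint was saved
total_epochs = 30

# “empty” epochs: iterate the DataLoader without training, so the
# shuffling consumes the same random numbers as in the first run
for _ in range(resume_epoch):
    for _ in loader:
        pass

# the RNG is now in the same state it had at epoch 20
for epoch in range(resume_epoch, total_epochs):
    for inputs, targets in loader:
        ...  # the usual training step

Alternatively, you could store torch.get_rng_state() (and torch.cuda.get_rng_state_all()) in the checkpoint and restore them via torch.set_rng_state() / torch.cuda.set_rng_state_all() when resuming, which avoids loading the skipped epochs' data.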