Validation accuracy drops after loading model from checkpoint

I use this code to save my VAE model every epoch:

torch.save({'epoch' : epoch,
            'encoder' : encoder.state_dict(),
            'decoder' : decoder.state_dict(),
            'property_predictor_model' : property_predictor_model.state_dict(),
            'train_ids' : train_ids,
            'valid_ids' : valid_ids,
            'validationQuality' : validationQuality
}, '{}/model_checkpoint{}.pt'.format(out_dir, epoch))
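For debugging, I am also considering storing a simple fingerprint of each model's parameters in the same checkpoint dict, so I can later verify that the weights survive the save/load round trip. param_checksum below is just a helper I sketched for this post, not something from my actual code:

import torch

def param_checksum(module):
    # Sum of all parameter values in the module; identical weights give identical sums.
    with torch.no_grad():
        return sum(p.double().sum().item() for p in module.parameters())

# e.g. add these (hypothetical) keys to the dict passed to torch.save above:
#   'encoder_checksum': param_checksum(encoder),
#   'decoder_checksum': param_checksum(decoder),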

And when I restart training, I use this code to initialize the models and load the previous state:

# checkpoint is the dict written by torch.save above
checkpoint = torch.load('{}/model_checkpoint{}.pt'.format(out_dir, epoch), map_location=device)

model_encode = VAE_encode(**encoder_parameter).to(device)
model_decode = VAE_decode(**decoder_parameter).to(device)
model_prop_predict = property_predictor_model(**prop_pred_parameter).to(device)

model_encode.load_state_dict(checkpoint['encoder'])
model_decode.load_state_dict(checkpoint['decoder'])
model_prop_predict.load_state_dict(checkpoint['property_predictor_model'])
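As an extra sanity check on the keys, I am thinking of calling load_state_dict with strict=False, which returns the missing and unexpected keys instead of raising, so I can print them:

missing, unexpected = model_encode.load_state_dict(checkpoint['encoder'], strict=False)
print('missing keys   :', missing)
print('unexpected keys:', unexpected)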

The validation accuracy saved in the checkpoint itself is around 60%, yet running validation immediately after loading the model returns an accuracy below 10%, and training continues as if the model had just been randomly initialized. The validation accuracy before and after I load the state dicts is different, so it is loading something, just not the right thing. Any ideas why this is happening?
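To confirm whether the weights themselves survive the round trip, I plan to compare the fingerprint stored at save time (the hypothetical 'encoder_checksum' key from the sketch above) with the freshly loaded model:

print('encoder checksum at save time :', checkpoint.get('encoder_checksum'))
print('encoder checksum after loading:', param_checksum(model_encode))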

Interesting note: if I use my debugger to load the checkpoint right after it is saved, within the same process, the validation accuracy is preserved; but if I cancel the script, restart it, and load the same checkpoint file, accuracy is below 10% again.
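Since the drop only happens after a full restart, I suspect something outside the state dicts differs between runs (for example the train/validation split, or an ordering/vocabulary built from the data). A check I am considering, assuming the restarted script builds its own valid_ids the same way as before, is to hash the split stored in the checkpoint against the new one, and to make sure the models are in eval mode during validation:

import hashlib

def ids_fingerprint(ids):
    # Order-sensitive hash of the id list, to confirm the split is identical across runs.
    return hashlib.md5(','.join(map(str, ids)).encode()).hexdigest()

print('valid_ids stored in checkpoint:', ids_fingerprint(checkpoint['valid_ids']))
print('valid_ids built by this run   :', ids_fingerprint(valid_ids))

# Put the models in eval mode before measuring validation accuracy,
# since dropout and batch norm behave differently in train mode.
model_encode.eval()
model_decode.eval()
model_prop_predict.eval()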