Save entire model in snapshot or not

Hi, I’m wondering whether it is possible to save the entire model in a snapshot. Since I want to resume training, I thought it would be good to save the entire model and load it later.

As of now, I’m using the approach below:

    def _save_snapshot(self, epoch):
        # Save only the state_dict of the underlying model
        # (.module unwraps the DDP/DataParallel wrapper) plus the epochs run so far.
        snapshot = {
            "MODEL_STATE": self.model.module.state_dict(),
            "EPOCHS_RUN": epoch,
        }
        torch.save(snapshot, self.snapshot_path)
        print(f"Epoch {epoch} | Training snapshot saved at {self.snapshot_path}")

However, I would like to save the entire model along with the epoch count, optimizer state, and loss.

I tried the approach below and just want to check whether it is correct, or whether there is a more efficient method:

    torch.save({
        'epoch': 12,
        'model': model,
        'loss': 'checkloss',
    }, './checking_/folder3/testcheck.pth')

    snap = torch.load('./checking_/folder3/testcheck.pth')
    new_model = snap['model']

Can you confirm whether this is correct and guide me here?

@ptrblck

I would not recommend the second approach, as it can easily break: when loading the pickled model object you would need to recreate the same code structure (the original class definitions and module layout).
Saving the state_dicts of the model and optimizer is the recommended approach, as also mentioned in the docs.
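
As a rough sketch (the function names here are just illustrative, and I'm assuming the same DDP-wrapped `self.model.module` setup from your `_save_snapshot`), the extended snapshot could look like this:

    import torch

    def save_snapshot(model, optimizer, epoch, loss, path):
        # Keep only state_dicts and plain Python values in the checkpoint,
        # not the model object itself.
        snapshot = {
            "MODEL_STATE": model.module.state_dict(),   # .module unwraps DDP
            "OPTIMIZER_STATE": optimizer.state_dict(),
            "EPOCHS_RUN": epoch,
            "LOSS": loss,
        }
        torch.save(snapshot, path)

    def load_snapshot(model, optimizer, path, device):
        # Recreate the model and optimizer first (same code as in training),
        # then restore their states from the snapshot.
        snapshot = torch.load(path, map_location=device)
        model.module.load_state_dict(snapshot["MODEL_STATE"])
        optimizer.load_state_dict(snapshot["OPTIMIZER_STATE"])
        return snapshot["EPOCHS_RUN"], snapshot["LOSS"]

When resuming, you would rebuild the model and optimizer exactly as in the original training script, call `load_snapshot`, and continue the loop from the returned epoch; the checkpoint then contains only tensors and plain Python values, so it does not depend on your code layout.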