Should model be in train mode when saving it?

When saving a model as in the following snippet of code, does it matter whether the model is in train() or eval() mode? Do train() and eval() change model.state_dict()? And if so, should model.train() be called before saving the model?

for epoch in range(5):
    model.train()  # prepare model for training
    for i, (local_batch, local_labels) in enumerate(training_generator):
        ...

    model.eval()  # prepare model for evaluation
    for i, (local_batch, local_labels) in enumerate(validation_generator):
        ...

    torch.save(..., os.path.join('models', f'model_{epoch+1:03}.pt'))

No, it doesn’t matter. Calling model.train() or model.eval() sets an internal flag to True or False, respectively. This flag is then used by some modules (e.g. nn.Dropout and batchnorm layers) to switch the behavior in their forward method; it does not change the parameters or buffers stored in model.state_dict().
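A quick sketch to verify this: the tensors returned by state_dict() are identical regardless of the mode, since switching modes only flips the training flag (the layer names below are arbitrary for illustration).

```python
import torch
import torch.nn as nn

# A small model containing modules whose forward pass depends on the mode.
model = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4), nn.Dropout(p=0.5))

model.train()
sd_train = {k: v.clone() for k, v in model.state_dict().items()}

model.eval()
sd_eval = model.state_dict()

# The stored tensors are identical in both modes.
same = all(torch.equal(sd_train[k], sd_eval[k]) for k in sd_train)
print(same)  # True
```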


Do you mean that I should put train() in this function?

def forward(self, x):
    return self.model(x)

No, you would call model.train() once before starting the training phase and model.eval() once before starting the evaluation phase, not inside forward.
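A minimal self-contained sketch of that pattern, using a toy model and random data in place of your actual model and generators (all names below are placeholders):

```python
import torch
import torch.nn as nn

# Toy stand-ins; substitute your own model, optimizer, and data loaders.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Dropout(0.5), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()
training_generator = [(torch.randn(16, 4), torch.randn(16, 1)) for _ in range(3)]
validation_generator = [(torch.randn(16, 4), torch.randn(16, 1)) for _ in range(2)]

for epoch in range(2):
    model.train()  # enable dropout etc. for training
    for local_batch, local_labels in training_generator:
        optimizer.zero_grad()
        loss = criterion(model(local_batch), local_labels)
        loss.backward()
        optimizer.step()

    model.eval()  # switch dropout etc. to inference behavior
    with torch.no_grad():  # no gradients needed during validation
        for local_batch, local_labels in validation_generator:
            val_loss = criterion(model(local_batch), local_labels)

print(model.training)  # False after the final eval() call
```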