Error loading model

I fine-tuned the pretrained ResNet101 model and saved it. The code looks like this:
torch.save(model.state_dict(), save_point + 'model1.pkl')
But when I reloaded it, I ran into a problem, like this:
model_ft = ResNet()
myresnet = model_ft.load_state_dict(torch.load("./checkpoint/model1.pkl"))
Error:
Traceback (most recent call last):
  File "feature-map.py", line 68, in <module>
    model_ft = ResNet()
NameError: name 'ResNet' is not defined

Why is this happening, and what should I do?

How did you define the class ResNet?
Apparently the class definition is missing. You have to import it from the original file where it was defined.

I used the pretrained model, so I didn't define the class myself. I couldn't find the source file, so I don't know how it was originally defined. I just printed the model, saw that it was a ResNet, and wrote ResNet().

OK, if you used the pretrained model, you can just create it in the same way as before and then load your trained state_dict into it:

import torchvision.models as models

model_ft = models.resnet101(pretrained=False)  # recreate the architecture
model_ft.load_state_dict(torch.load(PATH))     # then load your trained weights

Thank you, I tried it. This problem has been bothering me for a long time. Since I fine-tuned the pretrained model before saving it, I still get the following error:
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResNet:
While copying the parameter named "fc.weight", whose dimensions in the model are torch.Size([1000, 2048]) and whose dimensions in the checkpoint are torch.Size([11, 2048]).
While copying the parameter named "fc.bias", whose dimensions in the model are torch.Size([1000]) and whose dimensions in the checkpoint are torch.Size([11]).

Apparently you've changed the architecture of the pretrained model.
Could you post the code for your training?
Basically, you have to define the model in the same way as you did during training.
E.g. if you changed the last linear layer to 11 classes, you would need to make the same change again before loading the state_dict (see the sketch below).
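
Here is a minimal sketch of that idea, assuming the checkpoint was saved from a ResNet101 whose final layer was replaced with an 11-class nn.Linear, and that PATH points to your saved state_dict:

import torch
import torch.nn as nn
import torchvision.models as models

# Recreate the exact architecture that was used during training:
# ResNet101 with the final fully connected layer replaced for 11 classes.
model_ft = models.resnet101(pretrained=False)
model_ft.fc = nn.Linear(model_ft.fc.in_features, 11)

# Now the parameter shapes match the checkpoint, so loading succeeds.
model_ft.load_state_dict(torch.load(PATH))
model_ft.eval()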

Thank you very much. Here is the code for training and fine-tuning the pretrained model. Because I want to do feature extraction on a single image, I want to save the model.
######################################################################

# Training the model

def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                scheduler.step()
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history only if in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

            # epoch statistics (dataset_sizes maps phase -> number of samples)
            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

            # deep copy the model and save the best weights so far
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
                torch.save(model.state_dict(), 'model1.pkl')  # save model

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model

######################################################################

# Finetuning the convnet

model_ft = models.resnet101(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 11)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized

optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs

exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

######################################################################

# Train and evaluate

model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=30)
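
For the single-image feature extraction you mentioned, here is a minimal sketch of how you could reload the saved fine-tuned weights and pull out the 2048-dim pooled features; the image path and preprocessing values are just example placeholders:

import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Rebuild the fine-tuned architecture and load the saved weights.
model_ft = models.resnet101(pretrained=False)
model_ft.fc = nn.Linear(model_ft.fc.in_features, 11)
model_ft.load_state_dict(torch.load('./checkpoint/model1.pkl'))
model_ft.eval()

# Drop the final classifier so the forward pass returns the pooled features
# (everything up to and including the average pooling layer).
feature_extractor = nn.Sequential(*list(model_ft.children())[:-1])

# Preprocess a single image (example path and standard ImageNet normalization).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = Image.open('example.jpg').convert('RGB')
batch = preprocess(img).unsqueeze(0)  # shape [1, 3, 224, 224]

with torch.no_grad():
    features = feature_extractor(batch)  # shape [1, 2048, 1, 1]
    features = features.flatten(1)       # shape [1, 2048]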