Problem loading pretrained weights - missing keys

Hi, I’m trying to load the pretrained weights for a 3D ResNet-34. The model is from here:
https://github.com/kenshohara/3D-ResNets-PyTorch/blob/master/models/resnet.py

the weights are from here:
https://drive.google.com/drive/folders/1zvl89AgFAApbH0At-gMuZSeQB_LpNP-M

My code is:

def resnet34(feature_size, frame_size, frames_sequence):
    """Constructs a ResNet-34 model and loads Kinetics-pretrained weights."""

    path = "/mypath/resnet-34-kinetics.pth"
    pretrain = torch.load(path)
    model = ResNet(BasicBlock, [3, 4, 6, 3], frame_size, frames_sequence, feature_size)
    model.load_state_dict(pretrain['state_dict'])
    return model

The load fails and reports that all the weights are missing. The file size is as expected, so the file is probably not corrupted. Am I doing something wrong in the loading code?

Thanks!

Could you post the error message? Maybe there is just a minor naming mismatch between the state_dict and your model definition?

sure!

Namespace(annotation_path='/mypath/3D-ResNets-PyTorch/annotation_dir_path/hmdb51_1.json', arch='mobilenet-1', batch_size=16, begin_epoch=1, checkpoint=10, cnn_dim='3D', crop_position_in_test='c', dampening=0.9, dataset='hmdb51', feature_size=400, feature_size_ds=256, frame_size=224, frames_sequence=16, ft_begin_index=0, initial_scale=1.0, learning_rate=0.11, lr_patience=10, manual_seed=1, mean=[114.7748, 107.7354, 99.475], mean_dataset='activitynet', model='mobilenet', model_depth=1, momentum=0.9, n_classes=51, n_epochs=300, n_finetune_classes=51, n_scales=5, n_threads=4, n_val_samples=3, n_warmup_steps=4000, nesterov=False, no_cuda=False, no_hflip=False, no_mean_norm=False, no_softmax_in_test=False, no_train=False, no_val=False, norm_value=1, number_gpu=2, optimizer='sgd', pretrain_path='', result_path='/mypath/3D-ResNets-PyTorch/results', resume_path='', root_path='/mypath/3D-ResNets-PyTorch', scale_in_test=1.0, scale_step=0.84089641525, scales=[1.0, 0.84089641525, 0.7071067811803005, 0.5946035574934808, 0.4999999999911653], std=[38.7568578, 37.88248729, 40.02898126], std_norm=False, test=False, test_subset='val', train_crop='corner', video_path='/mypath/3D-ResNets-PyTorch/hmdb51_jpg', weight_decay=0.001)
3D CNN have been selected
/mypath/3D-ResNets-PyTorch/models/resnet.py:145: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
  m.weight = nn.init.kaiming_normal(m.weight, mode='fan_out')
Traceback (most recent call last):
  File "main_w3d.py", line 49, in <module>
    model, parameters = model(opt)
  File "/mypath/3D-ResNets-PyTorch/model_w3d.py", line 77, in model
    model = BNet(opt)
  File "/mypath/3D-ResNets-PyTorch/model_w3d.py", line 42, in __init__
    self.feature_size = resnet3d.resnet34(feature_size=opt.feature_size, frame_size=opt.frame_size, frames_sequence=opt.frames_sequence)
  File "/mypath/3D-ResNets-PyTorch/models/resnet.py", line 235, in resnet34
    model.load_state_dict(pretrain['state_dict'])
  File "/mypath/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 719, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResNet:
Missing key(s) in state_dict: “conv1.weight”, “bn1.weight”, “bn1.bias”, “bn1.running_mean”, “bn1.running_var”, “layer1.0.conv1.weight”, “layer1.0.bn1.weight”, “layer1.0.bn1.bias”, “layer1.0.bn1.running_mean”, “layer1.0.bn1.running_var”, “layer1.0.conv2.weight”, “layer1.0.bn2.weight”, “layer1.0.bn2.bias”, “layer1.0.bn2.running_mean”, “layer1.0.bn2.running_var”, “layer1.1.conv1.weight”, “layer1.1.bn1.weight”, “layer1.1.bn1.bias”, “layer1.1.bn1.running_mean”, “layer1.1.bn1.running_var”, “layer1.1.conv2.weight”, “layer1.1.bn2.weight”, “layer1.1.bn2.bias”, “layer1.1.bn2.running_mean”, “layer1.1.bn2.running_var”, “layer1.2.conv1.weight”, “layer1.2.bn1.weight”, “layer1.2.bn1.bias”, “layer1.2.bn1.running_mean”, “layer1.2.bn1.running_var”, “layer1.2.conv2.weight”, “layer1.2.bn2.weight”, “layer1.2.bn2.bias”, “layer1.2.bn2.running_mean”, “layer1.2.bn2.running_var”, “layer2.0.conv1.weight”, “layer2.0.bn1.weight”, “layer2.0.bn1.bias”, “layer2.0.bn1.running_mean”, “layer2.0.bn1.running_var”, “layer2.0.conv2.weight”, “layer2.0.bn2.weight”, “layer2.0.bn2.bias”, “layer2.0.bn2.running_mean”, “layer2.0.bn2.running_var”, “layer2.0.downsample.0.weight”, “layer2.0.downsample.1.weight”, “layer2.0.downsample.1.bias”, “layer2.0.downsample.1.running_mean”, “layer2.0.downsample.1.running_var”, “layer2.1.conv1.weight”, “layer2.1.bn1.weight”, “layer2.1.bn1.bias”, “layer2.1.bn1.running_mean”, “layer2.1.bn1.running_var”, “layer2.1.conv2.weight”, “layer2.1.bn2.weight”, “layer2.1.bn2.bias”, “layer2.1.bn2.running_mean”, “layer2.1.bn2.running_var”, “layer2.2.conv1.weight”, “layer2.2.bn1.weight”, “layer2.2.bn1.bias”, “layer2.2.bn1.running_mean”, “layer2.2.bn1.running_var”, “layer2.2.conv2.weight”, “layer2.2.bn2.weight”, “layer2.2.bn2.bias”, “layer2.2.bn2.running_mean”, “layer2.2.bn2.running_var”, “layer2.3.conv1.weight”, “layer2.3.bn1.weight”, “layer2.3.bn1.bias”, “layer2.3.bn1.running_mean”, “layer2.3.bn1.running_var”, “layer2.3.conv2.weight”, “layer2.3.bn2.weight”, “layer2.3.bn2.bias”, 
“layer2.3.bn2.running_mean”, “layer2.3.bn2.running_var”, “layer3.0.conv1.weight”, “layer3.0.bn1.weight”, “layer3.0.bn1.bias”, “layer3.0.bn1.running_mean”, “layer3.0.bn1.running_var”, “layer3.0.conv2.weight”, “layer3.0.bn2.weight”, “layer3.0.bn2.bias”, “layer3.0.bn2.running_mean”, “layer3.0.bn2.running_var”, “layer3.0.downsample.0.weight”, “layer3.0.downsample.1.weight”, “layer3.0.downsample.1.bias”, “layer3.0.downsample.1.running_mean”, “layer3.0.downsample.1.running_var”, “layer3.1.conv1.weight”, “layer3.1.bn1.weight”, “layer3.1.bn1.bias”, “layer3.1.bn1.running_mean”, “layer3.1.bn1.running_var”, “layer3.1.conv2.weight”, “layer3.1.bn2.weight”, “layer3.1.bn2.bias”, “layer3.1.bn2.running_mean”, “layer3.1.bn2.running_var”, “layer3.2.conv1.weight”, “layer3.2.bn1.weight”, “layer3.2.bn1.bias”, “layer3.2.bn1.running_mean”, “layer3.2.bn1.running_var”, “layer3.2.conv2.weight”, “layer3.2.bn2.weight”, “layer3.2.bn2.bias”, “layer3.2.bn2.running_mean”, “layer3.2.bn2.running_var”, “layer3.3.conv1.weight”, “layer3.3.bn1.weight”, “layer3.3.bn1.bias”, “layer3.3.bn1.running_mean”, “layer3.3.bn1.running_var”, “layer3.3.conv2.weight”, “layer3.3.bn2.weight”, “layer3.3.bn2.bias”, “layer3.3.bn2.running_mean”, “layer3.3.bn2.running_var”, “layer3.4.conv1.weight”, “layer3.4.bn1.weight”, “layer3.4.bn1.bias”, “layer3.4.bn1.running_mean”, “layer3.4.bn1.running_var”, “layer3.4.conv2.weight”, “layer3.4.bn2.weight”, “layer3.4.bn2.bias”, “layer3.4.bn2.running_mean”, “layer3.4.bn2.running_var”, “layer3.5.conv1.weight”, “layer3.5.bn1.weight”, “layer3.5.bn1.bias”, “layer3.5.bn1.running_mean”, “layer3.5.bn1.running_var”, “layer3.5.conv2.weight”, “layer3.5.bn2.weight”, “layer3.5.bn2.bias”, “layer3.5.bn2.running_mean”, “layer3.5.bn2.running_var”, “layer4.0.conv1.weight”, “layer4.0.bn1.weight”, “layer4.0.bn1.bias”, “layer4.0.bn1.running_mean”, “layer4.0.bn1.running_var”, “layer4.0.conv2.weight”, “layer4.0.bn2.weight”, “layer4.0.bn2.bias”, “layer4.0.bn2.running_mean”, “layer4.0.bn2.running_var”, 
“layer4.0.downsample.0.weight”, “layer4.0.downsample.1.weight”, “layer4.0.downsample.1.bias”, “layer4.0.downsample.1.running_mean”, “layer4.0.downsample.1.running_var”, “layer4.1.conv1.weight”, “layer4.1.bn1.weight”, “layer4.1.bn1.bias”, “layer4.1.bn1.running_mean”, “layer4.1.bn1.running_var”, “layer4.1.conv2.weight”, “layer4.1.bn2.weight”, “layer4.1.bn2.bias”, “layer4.1.bn2.running_mean”, “layer4.1.bn2.running_var”, “layer4.2.conv1.weight”, “layer4.2.bn1.weight”, “layer4.2.bn1.bias”, “layer4.2.bn1.running_mean”, “layer4.2.bn1.running_var”, “layer4.2.conv2.weight”, “layer4.2.bn2.weight”, “layer4.2.bn2.bias”, “layer4.2.bn2.running_mean”, “layer4.2.bn2.running_var”, “fc.weight”, “fc.bias”.
Unexpected key(s) in state_dict: “module.conv1.weight”, “module.bn1.weight”, “module.bn1.bias”, “module.bn1.running_mean”, “module.bn1.running_var”, “module.layer1.0.conv1.weight”, “module.layer1.0.bn1.weight”, “module.layer1.0.bn1.bias”, “module.layer1.0.bn1.running_mean”, “module.layer1.0.bn1.running_var”, “module.layer1.0.conv2.weight”, “module.layer1.0.bn2.weight”, “module.layer1.0.bn2.bias”, “module.layer1.0.bn2.running_mean”, “module.layer1.0.bn2.running_var”, “module.layer1.1.conv1.weight”, “module.layer1.1.bn1.weight”, “module.layer1.1.bn1.bias”, “module.layer1.1.bn1.running_mean”, “module.layer1.1.bn1.running_var”, “module.layer1.1.conv2.weight”, “module.layer1.1.bn2.weight”, “module.layer1.1.bn2.bias”, “module.layer1.1.bn2.running_mean”, “module.layer1.1.bn2.running_var”, “module.layer1.2.conv1.weight”, “module.layer1.2.bn1.weight”, “module.layer1.2.bn1.bias”, “module.layer1.2.bn1.running_mean”, “module.layer1.2.bn1.running_var”, “module.layer1.2.conv2.weight”, “module.layer1.2.bn2.weight”, “module.layer1.2.bn2.bias”, “module.layer1.2.bn2.running_mean”, “module.layer1.2.bn2.running_var”, “module.layer2.0.conv1.weight”, “module.layer2.0.bn1.weight”, “module.layer2.0.bn1.bias”, “module.layer2.0.bn1.running_mean”, “module.layer2.0.bn1.running_var”, “module.layer2.0.conv2.weight”, “module.layer2.0.bn2.weight”, “module.layer2.0.bn2.bias”, “module.layer2.0.bn2.running_mean”, “module.layer2.0.bn2.running_var”, “module.layer2.1.conv1.weight”, “module.layer2.1.bn1.weight”, “module.layer2.1.bn1.bias”, “module.layer2.1.bn1.running_mean”, “module.layer2.1.bn1.running_var”, “module.layer2.1.conv2.weight”, “module.layer2.1.bn2.weight”, “module.layer2.1.bn2.bias”, “module.layer2.1.bn2.running_mean”, “module.layer2.1.bn2.running_var”, “module.layer2.2.conv1.weight”, “module.layer2.2.bn1.weight”, “module.layer2.2.bn1.bias”, “module.layer2.2.bn1.running_mean”, “module.layer2.2.bn1.running_var”, “module.layer2.2.conv2.weight”, “module.layer2.2.bn2.weight”, 
“module.layer2.2.bn2.bias”, “module.layer2.2.bn2.running_mean”, “module.layer2.2.bn2.running_var”, “module.layer2.3.conv1.weight”, “module.layer2.3.bn1.weight”, “module.layer2.3.bn1.bias”, “module.layer2.3.bn1.running_mean”, “module.layer2.3.bn1.running_var”, “module.layer2.3.conv2.weight”, “module.layer2.3.bn2.weight”, “module.layer2.3.bn2.bias”, “module.layer2.3.bn2.running_mean”, “module.layer2.3.bn2.running_var”, “module.layer3.0.conv1.weight”, “module.layer3.0.bn1.weight”, “module.layer3.0.bn1.bias”, “module.layer3.0.bn1.running_mean”, “module.layer3.0.bn1.running_var”, “module.layer3.0.conv2.weight”, “module.layer3.0.bn2.weight”, “module.layer3.0.bn2.bias”, “module.layer3.0.bn2.running_mean”, “module.layer3.0.bn2.running_var”, “module.layer3.1.conv1.weight”, “module.layer3.1.bn1.weight”, “module.layer3.1.bn1.bias”, “module.layer3.1.bn1.running_mean”, “module.layer3.1.bn1.running_var”, “module.layer3.1.conv2.weight”, “module.layer3.1.bn2.weight”, “module.layer3.1.bn2.bias”, “module.layer3.1.bn2.running_mean”, “module.layer3.1.bn2.running_var”, “module.layer3.2.conv1.weight”, “module.layer3.2.bn1.weight”, “module.layer3.2.bn1.bias”, “module.layer3.2.bn1.running_mean”, “module.layer3.2.bn1.running_var”, “module.layer3.2.conv2.weight”, “module.layer3.2.bn2.weight”, “module.layer3.2.bn2.bias”, “module.layer3.2.bn2.running_mean”, “module.layer3.2.bn2.running_var”, “module.layer3.3.conv1.weight”, “module.layer3.3.bn1.weight”, “module.layer3.3.bn1.bias”, “module.layer3.3.bn1.running_mean”, “module.layer3.3.bn1.running_var”, “module.layer3.3.conv2.weight”, “module.layer3.3.bn2.weight”, “module.layer3.3.bn2.bias”, “module.layer3.3.bn2.running_mean”, “module.layer3.3.bn2.running_var”, “module.layer3.4.conv1.weight”, “module.layer3.4.bn1.weight”, “module.layer3.4.bn1.bias”, “module.layer3.4.bn1.running_mean”, “module.layer3.4.bn1.running_var”, “module.layer3.4.conv2.weight”, “module.layer3.4.bn2.weight”, “module.layer3.4.bn2.bias”, “module.layer3.4.bn2.running_mean”, 
“module.layer3.4.bn2.running_var”, “module.layer3.5.conv1.weight”, “module.layer3.5.bn1.weight”, “module.layer3.5.bn1.bias”, “module.layer3.5.bn1.running_mean”, “module.layer3.5.bn1.running_var”, “module.layer3.5.conv2.weight”, “module.layer3.5.bn2.weight”, “module.layer3.5.bn2.bias”, “module.layer3.5.bn2.running_mean”, “module.layer3.5.bn2.running_var”, “module.layer4.0.conv1.weight”, “module.layer4.0.bn1.weight”, “module.layer4.0.bn1.bias”, “module.layer4.0.bn1.running_mean”, “module.layer4.0.bn1.running_var”, “module.layer4.0.conv2.weight”, “module.layer4.0.bn2.weight”, “module.layer4.0.bn2.bias”, “module.layer4.0.bn2.running_mean”, “module.layer4.0.bn2.running_var”, “module.layer4.1.conv1.weight”, “module.layer4.1.bn1.weight”, “module.layer4.1.bn1.bias”, “module.layer4.1.bn1.running_mean”, “module.layer4.1.bn1.running_var”, “module.layer4.1.conv2.weight”, “module.layer4.1.bn2.weight”, “module.layer4.1.bn2.bias”, “module.layer4.1.bn2.running_mean”, “module.layer4.1.bn2.running_var”, “module.layer4.2.conv1.weight”, “module.layer4.2.bn1.weight”, “module.layer4.2.bn1.bias”, “module.layer4.2.bn1.running_mean”, “module.layer4.2.bn1.running_var”, “module.layer4.2.conv2.weight”, “module.layer4.2.bn2.weight”, “module.layer4.2.bn2.bias”, “module.layer4.2.bn2.running_mean”, “module.layer4.2.bn2.running_var”, “module.fc.weight”, “module.fc.bias”.

Thanks a lot!

Edit: I now see that the parameters inside the pretrained weights have a “module” level in the hierarchy before the layer names. Is there any way to solve that?

The model was probably saved as an nn.DataParallel module. Have a look at this thread.
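In case it helps future readers, here is a minimal sketch of that fix, using a small stand-in module instead of the 3D ResNet (the checkpoint is simulated rather than loaded from disk): wrap the fresh model in nn.DataParallel so its keys get the same “module.” prefix as the checkpoint, load, then keep the inner module.

```python
import torch.nn as nn

# Stand-in for the 3D ResNet-34; the real model would be
# ResNet(BasicBlock, [3, 4, 6, 3], ...).
model = nn.Linear(4, 2)

# Simulate a checkpoint saved from an nn.DataParallel-wrapped model:
# every key in its state_dict starts with "module.".
checkpoint = {"state_dict": nn.DataParallel(nn.Linear(4, 2)).state_dict()}

# Wrap the fresh model the same way so the key names line up,
# load the weights, then keep only the inner module.
wrapped = nn.DataParallel(model)
wrapped.load_state_dict(checkpoint["state_dict"])
model = wrapped.module
```

That is all “temporarily” means here: the nn.DataParallel wrapper only exists for the duration of the load.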


Thanks a lot! I’m having a hard time understanding what “wrap it in nn.DataParallel temporarily” means.
I’m already wrapping my model in nn.DataParallel after I build it, but how can I add it temporarily, just for loading purposes?
Given that this is my data loader (if I understood you right):

train_loader = torch.utils.data.DataLoader(
    training_data,
    batch_size=opt.batch_size,
    shuffle=True,
    num_workers=opt.n_threads,
    pin_memory=True,
    drop_last=True)

I succeeded in solving the problem using a new dict, but I’m still curious…
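For completeness, the “new dict” approach is just renaming the keys: strip the leading “module.” from each one before calling load_state_dict. A sketch with a stand-in module (the torch.load call on the real checkpoint path is simulated here):

```python
import torch.nn as nn
from collections import OrderedDict

# Simulated checkpoint; in the question this would come from
# torch.load("/mypath/resnet-34-kinetics.pth")["state_dict"].
state_dict = nn.DataParallel(nn.Linear(4, 2)).state_dict()

# Drop the leading "module." from every key so a plain (unwrapped)
# model accepts the state_dict.
new_state_dict = OrderedDict(
    (key[len("module."):] if key.startswith("module.") else key, value)
    for key, value in state_dict.items()
)

model = nn.Linear(4, 2)  # stand-in for the 3D ResNet-34
model.load_state_dict(new_state_dict)
```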