How to make debug layers be ignored during pre-trained weight assignment?

I modified the VGG network by injecting debug-purpose layers that simply save the output of a layer. Clearly such layers do not have .weight. Now I am trying to load pretrained weights to restart the model. However, I get errors:
Missing key(s) in state_dict: "features.save_layer1_1.weight", "features.save_layer1_1.bias"...

How do I force PyTorch to ignore such debug layers and load weights only into the layers that actually have weights?

Here is the new VGG model.

The way the model is loaded now:

import collections
import torch


def load_vgg16():
    model = VGG16(num_classes=365)  # the modified VGG16 class that contains the debug layers

    # give each module in model.features a name (including the debug save_layer* modules)
    model.features = torch.nn.Sequential(collections.OrderedDict(zip([
        'conv1_1', 'relu1_1', 'save_layer1_1',
        'conv1_2', 'relu1_2', 'save_layer1_2',
        'pool1',
        'conv2_1', 'relu2_1', 'save_layer2_1',
        'conv2_2', 'relu2_2', 'save_layer2_2',
        'pool2',
        'conv3_1', 'relu3_1', 'save_layer3_1',
        'conv3_2', 'relu3_2', 'save_layer3_2',
        'conv3_3', 'relu3_3', 'save_layer3_3',
        'pool3',
        'conv4_1', 'relu4_1', 'save_layer4_1',
        'conv4_2', 'relu4_2', 'save_layer4_2',
        'conv4_3', 'relu4_3', 'save_layer4_3',
        'pool4',
        'conv5_1', 'relu5_1', 'save_layer5_1',
        'conv5_2', 'relu5_2', 'save_layer5_2',
        'conv5_3', 'relu5_3', 'save_layer5_3',
        'pool5'],
        model.features)))
    # same naming scheme for the classifier head
    model.classifier = torch.nn.Sequential(collections.OrderedDict(zip([
        'fc6', 'relu6', 'save_layer6',
        'drop6',
        'fc7', 'relu7', 'save_layer7',
        'drop7',
        'fc8a'],
        model.classifier)))

    sd = torch.hub.load_state_dict_from_url(url)  # url: location of the pretrained VGG16 weights
    model.load_state_dict(sd)  # <-- this is where the missing-key error is raised
    model.eval()
    return model

‘save_layer*’ layers should be ignored during loading.

You could try to pass strict=False while loading the state_dict.
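For example, a minimal sketch assuming sd and model are the same objects as in the code above:

    # strict=False skips keys that are missing from the state_dict as well as
    # unexpected keys, and returns both lists so you can inspect what was skipped
    missing, unexpected = model.load_state_dict(sd, strict=False)
    print('missing keys:', missing)
    print('unexpected keys:', unexpected)

Just be aware that strict=False also silently skips any genuinely missing weights, so it is worth checking the returned lists.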

This problem seems to be solved for me by modifying the state dictionary’s _metadata.