KeyError when accessing activation from pytorch register_forward_hook

I am trying to save the output from the fc1 layer of the model. Below is my model formulation:

model_ft = models.resnet50(pretrained=True)
num_ftrs = model_ft.fc.out_features
model_ft.fc1 = nn.ReLU(nn.Linear(num_ftrs, 512))
model_ft.fc2 = nn.Linear(num_ftrs, num_classes)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()
optimizer_ft = optim.SGD(model_ft.parameters(), lr=learning_rate, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

The code below shows how I am trying to access the activation:

activation = {}
def get_activation(name):
    def hook(model_ft, input, output):
        activation[name] = output.detach()
    return hook

model_ft.fc1.register_forward_hook(get_activation('fc1'))

x = torch.randn(12,3,224,224).to('cuda')
output = model_ft(x)
block4_output = activation['fc1']

The above method throws KeyError: 'fc1'. Could anyone help me fix this problem?

Thank You!!

model_ft.fc1 and model_ft.fc2 are new attributes, which are never used in the original forward method.
If you want to replace the original model_ft.fc with these two new layers, assign them to the same attribute and wrap them into an nn.Sequential container:

model_ft = models.resnet50(pretrained=False)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Sequential(   # replaces the original fc, so it runs in forward
    nn.ReLU(),
    nn.Linear(num_ftrs, 10)    # 10 is just an example number of classes
)

activation = {}
def get_activation(name):
    def hook(model_ft, input, output):
        activation[name] = output.detach()
    return hook

model_ft.fc.register_forward_hook(get_activation('fc'))

x = torch.randn(12,3,224,224)
output = model_ft(x)
block4_output = activation['fc']

Your usage of out_features is also wrong, since you are passing it as the in_features of the new layers. Also, in case you want to use different "branches" in your model, you should override the forward method and create a custom model.