Weird results with IntermediateLayerGetter to get intermediate features of AlexNet

I use IntermediateLayerGetter from torch_intermediate_layer_getter to get intermediate features of AlexNet, but something weird happened.
When I remove the layers after the fc6 layer and let the model output the fc6 features directly, the results are quite different from the results obtained with IntermediateLayerGetter on the pretrained AlexNet.
The code looks like the following:

# use IntermediateLayerGetter
import torch
from torchvision import models
from torch_intermediate_layer_getter import IntermediateLayerGetter as MidGetter

alexnet = models.alexnet(pretrained=True)
alexnet.cuda()

return_layers = {'classifier.1': 'classifier.1'}
mid_getter = MidGetter(alexnet, return_layers=return_layers, keep_output=True)

Testlr_list = []
Test_batchsize = 1
alexnet.eval()
with torch.no_grad():
    for i in range(TestImageData.size()[0] // Test_batchsize):
        local_X = TestImageData[i*Test_batchsize:(i+1)*Test_batchsize, :, :, :].float().cuda()
        mid_outputs, model_output = mid_getter(local_X)
        print(mid_outputs['classifier.1'].size())
        Testlr_list.append(mid_outputs['classifier.1'].detach().cpu())
Testlr_data = torch.cat(Testlr_list)
print(Testlr_data.shape)
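
For reference, classifier.1 is the first Linear layer (fc6) of the pretrained AlexNet classifier (the exact repr depends on the torchvision version):

print(alexnet.classifier)
# Sequential(
#   (0): Dropout(p=0.5, inplace=False)
#   (1): Linear(in_features=9216, out_features=4096, bias=True)
#   (2): ReLU(inplace=True)
#   (3): Dropout(p=0.5, inplace=False)
#   (4): Linear(in_features=4096, out_features=4096, bias=True)
#   (5): ReLU(inplace=True)
#   (6): Linear(in_features=4096, out_features=1000, bias=True)
# )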

# remove the layers after fc6 and output the fc6 features directly
import torch.nn as nn

m = models.alexnet(pretrained=True)
m.classifier = nn.Sequential(*list(m.classifier.children())[:-5])  # keeps Dropout + fc6
m.cuda()

Testlr_list = []
Test_batchsize = 1
m.eval()
with torch.no_grad():
    for i in range(TestImageData.size()[0] // Test_batchsize):
        local_X = TestImageData[i*Test_batchsize:(i+1)*Test_batchsize, :, :, :].float().cuda()
        out = m(local_X)
        Testlr_list.append(out.detach().cpu())
Testlr_data = torch.cat(Testlr_list)
print(Testlr_data)

The results of the two approaches are quite different. Is there anything wrong with my code?
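
A quick way to quantify the difference (assuming the two results above are stored under separate names, say Testlr_getter and Testlr_truncated, rather than both as Testlr_data):

# hypothetical names for the two results computed above
print((Testlr_getter - Testlr_truncated).abs().max())
# if this prints True, the two results differ by exactly a ReLU
print(torch.allclose(Testlr_getter, torch.relu(Testlr_truncated)))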

I don’t know how MidGetter is defined, but IntermediateLayerGetter doesn’t accept the keep_output argument. In any case, I think you are seeing a difference because AlexNet uses inplace nn.ReLU modules, which manipulate the output of the previous layer in place.
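
Here is a minimal sketch of the effect (an illustration, not the library's code): a hook that stores output.detach() keeps a tensor that shares storage with the layer's output, so the inplace ReLU that runs next rewrites the "captured" activation as well.

import torch
import torch.nn as nn

fc = nn.Linear(4, 4)
relu = nn.ReLU(inplace=True)

x = torch.randn(1, 4)
fc_out = fc(x)
captured = fc_out.detach()   # detach() shares storage with fc_out
relu(fc_out)                 # the inplace ReLU rewrites `captured` too
print((captured < 0).any())  # tensor(False): no negative values survive

One way around the aliasing would be to store a copy instead, e.g. activation[name] = output.detach().clone(), or to replace the inplace ReLUs with out-of-place ones before extracting features.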

Hi, thanks for the information.
MidGetter is defined as follows:

from torch_intermediate_layer_getter import IntermediateLayerGetter as MidGetter

So is IntermediateLayerGetter not suitable for extracting intermediate features of AlexNet? Or how can I fix the problem? Do you have any other suggestions?

Looking forward to hearing from you!

Hi ptrblck, can I use a forward hook to extract the intermediate features of AlexNet? I tried, but something weird happened again: this result is quite different from the other two. My code looks like this:

import torch
import torchvision

dtype = torch.cuda.FloatTensor  # assumption: dtype was defined like this earlier

model = torchvision.models.alexnet(pretrained=True)
model.type(dtype)

activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()  # detach() still shares storage with `output`
    return hook

h = model.classifier[1].register_forward_hook(get_activation('fc6'))

Testcomp_list = []
Test_batchsize = 1
model.eval()
with torch.no_grad():
    for i in range(TestImageData.size()[0] // Test_batchsize):
        local_X = TestImageData[i*Test_batchsize:(i+1)*Test_batchsize, :, :, :].type(dtype)
        out = model(local_X)
        Testcomp_list.append(activation['fc6'].detach().cpu())
# h.remove()
print(len(Testcomp_list))
print(Testcomp_list[1].size())
Testcomp_data = torch.cat(Testcomp_list)
print(Testcomp_data.size())

Is there anything wrong with this code as well?