I use IntermediateLayerGetter from torch_intermediate_layer_getter to extract intermediate features from AlexNet, but something weird happened.
When I remove the layers after the fc6 layer and let the model output the fc6 features directly, the results are quite different from the ones I get when using IntermediateLayerGetter to extract the fc6 features from the pretrained AlexNet.
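For context, mid_getter is presumably constructed along these lines, following the package's README; the exact return_layers mapping is my assumption, inferred from the 'classifier.1' key used below:

from torch_intermediate_layer_getter import IntermediateLayerGetter as MidGetter
import torchvision.models as models

alexnet = models.alexnet(pretrained=True).cuda()

# Assumed mapping: capture the output of classifier.1 (fc6) under the same name;
# keep_output=True makes the getter also return the final model output.
return_layers = {'classifier.1': 'classifier.1'}
mid_getter = MidGetter(alexnet, return_layers=return_layers, keep_output=True)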
The code looks like the following:
Testlr_list = []
Test_batchsize = 1
alexnet.eval()
with torch.no_grad():
    for i in range(TestImageData.size()[0] // Test_batchsize):
        local_X = TestImageData[i*Test_batchsize:(i+1)*Test_batchsize, :, :, :].float().cuda()
        mid_outputs, model_output = mid_getter(local_X)
        print(mid_outputs['classifier.1'].size())
        Testlr_list.append(mid_outputs['classifier.1'].detach().cpu())
Testlr_data = torch.cat(Testlr_list)
print(Testlr_data.shape)
# Remove the layers after fc6 and output the fc6 features directly
m = models.alexnet(pretrained=True)
m.classifier = nn.Sequential(*list(m.classifier.children())[:-5])
m.cuda()

Testlr_list = []
Test_batchsize = 1
m.eval()
with torch.no_grad():
    for i in range(TestImageData.size()[0] // Test_batchsize):
        local_X = TestImageData[i*Test_batchsize:(i+1)*Test_batchsize, :, :, :].float().cuda()
        out = m(local_X)
        Testlr_list.append(out.detach().cpu())
Testlr_data = torch.cat(Testlr_list)
print(Testlr_data)
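For reference, printing the truncated classifier shows which modules survive the slice; with the default torchvision AlexNet, only the first Dropout and the fc6 Linear layer should remain:

print(m.classifier)
# Expected (default torchvision AlexNet):
# Sequential(
#   (0): Dropout(p=0.5, inplace=False)
#   (1): Linear(in_features=9216, out_features=4096, bias=True)
# )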
The results are quite different. Is there anything wrong with my code?
I don't know how MidGetter is defined, but IntermediateLayerGetter doesn't accept a keep_output argument. In any case, I think the difference you are seeing comes from AlexNet's inplace nn.ReLU modules, which manipulate the output of the previous layer in place.
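One way to test that explanation is to switch every ReLU to its out-of-place form before wrapping the model, so later layers can no longer overwrite the captured fc6 tensor. A minimal sketch of that idea (not code from the thread):

import torch.nn as nn
import torchvision.models as models

alexnet = models.alexnet(pretrained=True)

# Make every ReLU out-of-place so it no longer mutates the tensor
# that the intermediate-layer getter holds a reference to.
for module in alexnet.modules():
    if isinstance(module, nn.ReLU):
        module.inplace = False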
Hi ptrblck, can I use a forward hook to extract the intermediate features of AlexNet? I tried, but something weird happened again: this result is quite different from the other two. My code looks like this:
model = torchvision.models.alexnet(pretrained=True)
model.type(dtype)
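For comparison, a minimal sketch of the forward-hook approach (the hook and variable names are illustrative, not the poster's actual code); note the clone(), which protects the stored tensor from the inplace ReLU that follows fc6:

import torch
import torchvision

model = torchvision.models.alexnet(pretrained=True).eval()
features = {}

def save_fc6(module, inputs, output):
    # Clone so the following inplace ReLU cannot overwrite the stored tensor.
    features['fc6'] = output.detach().clone()

handle = model.classifier[1].register_forward_hook(save_fc6)

with torch.no_grad():
    _ = model(torch.randn(1, 3, 224, 224))

print(features['fc6'].shape)  # torch.Size([1, 4096])
handle.remove()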