Hello, I want to create a CNN explainability class using the `create_feature_extractor()` functionality. The problem I am facing is that I cannot reach certain layers in the models. For example, `features.7.0` is the last convolutional layer before the average pool:
```python
from torchvision import models
from torchvision.models.feature_extraction import create_feature_extractor, get_graph_node_names

model = models.efficientnet_v2_s(weights=models.EfficientNet_V2_S_Weights.DEFAULT)
model.eval()

return_nodes = ['features.7.0', 'classifier.1']
feature_extractor = create_feature_extractor(model, return_nodes=return_nodes)
```
This piece of code produces an error:
```
ValueError: node: 'features.7.0' is not present in model. Hint: use `get_graph_node_names` to make sure the `return_nodes` you specified are present. It may even be that you need to specify `train_return_nodes` and `eval_return_nodes` separately.
```
As the error suggests, I ran `get_graph_node_names(model)`, but I could not find this layer in the output, even though `model.features[7][0]` works and gives me the last convolution.
If I pass the return nodes as `return_nodes = ['features.7', 'classifier.1']` (i.e. without the `.0` part), the feature extractor works, but the problem is that `features.7` is a `Conv2dNormActivation`, which consists of one convolution, one batch norm, and one activation function (as seen in the picture).
Is there a possible solution for this problem so that I could get the output of this convolution operation from the feature extractor functionality?