All-zero output from a pretrained VGG network's feature extractor

I am using the pretrained VGG network to implement Fast R-CNN. As described in the paper, I divided the VGG into two parts, the features and the classifier. Here's my code.

    import torch.nn as nn
    from torchvision import models


    def extract_vgg16(model):
        """
        First, the last max pooling layer is replaced by a RoI
        pooling layer that is configured by setting H and W to be
        compatible with the net’s first fully connected layer (e.g.,
        H = W = 7 for VGG16).

        Second, the network’s last fully connected layer and softmax
        (which were trained for 1000-way ImageNet classification)
        are replaced with the two sibling layers described
        earlier (a fully connected layer and softmax over K + 1 categories
        and category-specific bounding-box regressors).

        So this function returns the feature extractor from VGG with the
        last max pooling layer removed, and the classifier with the last
        fully connected layer removed.
        :param model: a pretrained torchvision VGG16 model
        :return: (feature extractor, classifier) as nn.Sequential modules
        """
        features = list(model.features.children())
        features.pop()  # remove the last max pooling layer

        classifier = list(model.classifier.children())
        classifier.pop()  # remove the last fully connected layer (the 1000-way ImageNet head)

        return nn.Sequential(*features), nn.Sequential(*classifier)

And I use it like this:

    extractor, classifier = extract_vgg16(models.vgg16(pretrained=True))
    head = VGG16RoIHead(classifier, 3, (7, 7))
    model = FastRCNN(extractor, head)
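
For context, `VGG16RoIHead` and `FastRCNN` are not shown here; roughly, the head is along the lines of the sketch below (the exact layout, the use of `torchvision.ops.roi_pool`, and the layer names are only an illustration of the two sibling layers from the paper, not my actual code):

    import torch.nn as nn
    from torchvision.ops import roi_pool  # stands in for the RoI pooling layer


    class VGG16RoIHead(nn.Module):
        """RoI pooling + the truncated VGG classifier + the two sibling
        layers (scores over K + 1 categories and per-class box regressors)."""

        def __init__(self, classifier, n_classes, roi_size, spatial_scale=1.0 / 16):
            super().__init__()
            self.classifier = classifier        # fc6/fc7 from VGG16
            self.roi_size = roi_size            # (7, 7) for VGG16
            self.spatial_scale = spatial_scale  # feature stride of conv5_3
            self.score = nn.Linear(4096, n_classes + 1)       # K + 1 classes
            self.bbox = nn.Linear(4096, (n_classes + 1) * 4)  # per-class box deltas

        def forward(self, features, rois):
            # rois: (R, 5) tensor of [batch_index, x1, y1, x2, y2] in image coordinates
            pooled = roi_pool(features, rois, self.roi_size, self.spatial_scale)
            pooled = pooled.flatten(start_dim=1)  # (R, 512 * 7 * 7)
            fc7 = self.classifier(pooled)
            return self.score(fc7), self.bbox(fc7)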

However, when I check the output of the feature extractor, it turns out to be all zeros. Here are the screenshots.


As you can see, the input tensor has no problem, but the feature extractor's output seems strange.

Can you try counting the number of non-zero entries in the feature output?
I noticed the same thing some time ago with VGGFace. The features are really sparse, and this would be due to the ReLU units. Nevertheless, the features (from VGGFace) are pretty good to use, at least for face recognition.

I think you need not worry about the nature of the features unless they perform really poorly on the task.
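
Something along these lines would do it (a quick sketch; `extractor` is the module from `extract_vgg16`, and `img` is assumed to be an already-preprocessed `(3, H, W)` image tensor):

    import torch

    extractor.eval()  # make the check deterministic
    with torch.no_grad():
        feats = extractor(img.unsqueeze(0))

    nonzero = feats.count_nonzero().item()
    print(f"{nonzero} / {feats.numel()} entries are non-zero "
          f"({100.0 * nonzero / feats.numel():.1f}%)")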

Now I've found that the output of the feature extractor is not all zeros. But the RoIs' output probabilities from the classifier layer are all the same, even though the RoIs cover totally different areas. I wonder if the way I extract the pretrained VGG model has some problem?
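
This is the kind of sanity check I'm running (just a sketch with hypothetical regions; `extractor` and `classifier` come from `extract_vgg16` above, `img` is a preprocessed image tensor, and `F.adaptive_max_pool2d` stands in for the real RoI pooling) to see whether two clearly different regions of the feature map really give the same classifier output:

    import torch
    import torch.nn.functional as F

    extractor.eval()
    classifier.eval()  # turn off dropout so the comparison is deterministic

    with torch.no_grad():
        feats = extractor(img.unsqueeze(0))  # (1, 512, H/16, W/16)

        # two hypothetical, non-overlapping corners of the feature map
        region_a = feats[:, :, :7, :7]
        region_b = feats[:, :, -7:, -7:]

        outs = []
        for region in (region_a, region_b):
            pooled = F.adaptive_max_pool2d(region, (7, 7))  # stand-in for RoI pooling
            outs.append(classifier(pooled.flatten(start_dim=1)))

        print("max abs difference:", (outs[0] - outs[1]).abs().max().item())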

Hi, I ran into the same problem in my project. How did you solve it? Is there anything I should pay particular attention to when extracting the features?

Sorry, it's been a long time since I last dealt with this code; I've almost forgotten it all.

Hi, I am having the same problem: the output of the pretrained vgg16 is mostly zeros.
Further, it's not too different if I feed some random numbers to it…
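
This is roughly how I'm comparing the two (a small sketch, assuming the standard ImageNet preprocessing from `torchvision.transforms` and an arbitrary local image `test.jpg`), measuring how sparse the conv activations are for a properly normalized image versus pure noise:

    import torch
    from PIL import Image
    from torchvision import models, transforms

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                             std=[0.229, 0.224, 0.225]),
    ])

    vgg = models.vgg16(pretrained=True).eval()
    features = vgg.features

    def sparsity(x):
        with torch.no_grad():
            out = features(x)
        return 1.0 - out.count_nonzero().item() / out.numel()

    img = preprocess(Image.open("test.jpg").convert("RGB")).unsqueeze(0)
    noise = torch.rand(1, 3, 224, 224)

    print(f"sparsity on real image:   {sparsity(img):.2%}")
    print(f"sparsity on random noise: {sparsity(noise):.2%}")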