I am using the pretrained VGG network to implement Fast R-CNN. As described in the paper, I split VGG into two parts, the feature extractor and the classifier. Here's my code:
import torch.nn as nn
from torchvision import models

def extract_vgg16(model):
    """
    First, the last max pooling layer is replaced by a RoI
    pooling layer that is configured by setting H and W to be
    compatible with the net's first fully connected layer (e.g.,
    H = W = 7 for VGG16).
    Second, the network's last fully connected layer and softmax
    (which were trained for 1000-way ImageNet classification)
    are replaced with the two sibling layers described
    earlier (a fully connected layer and softmax over K + 1 categories
    and category-specific bounding-box regressors).
    So this function returns the VGG feature extractor with the last
    max pooling layer removed, and the classifier with the last
    fully connected layer removed.
    :param model: a torchvision VGG16 instance
    :return: (feature extractor, classifier) as nn.Sequential modules
    """
    features = list(model.features.children())
    features.pop()  # remove the last max pooling layer
    classifier = list(model.classifier.children())
    classifier.pop()  # remove the last 1000-way fully connected layer (torchvision's VGG has no softmax module)
    return nn.Sequential(*features), nn.Sequential(*classifier)
And I use it like this:
extractor, classifier = extract_vgg16(models.vgg16(pretrained=True))
head = VGG16RoIHead(classifier, 3, (7, 7))
model = FastRCNN(extractor, head)
However, when I check the output of the feature extractor, it turns out to be all zeros. Here are the screenshots:
As you can see, the input tensor looks fine, but the feature extractor's output is strange.