Why does TorchVision's GoogLeNet implementation have more parameters than that in the original paper?

If I run

import torchvision.models as models

model = models.googlenet(pretrained=False)
params_count = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("GoogLeNet number of trainable parameters: {}".format(params_count))

I get

GoogLeNet number of trainable parameters: 13004888

But, according to the original paper, the number of parameters should be 6.7977 million.

What am I missing? Is TorchVision implementing a different version of GoogLeNet than the one in the original paper?

Thanks,
Mario

If you skip the auxiliary classifier layers, you should get roughly 6.6 million parameters.
Table 1 of the original paper seems to exclude them as well.
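
For example, something like this should reproduce both counts (a minimal sketch; it assumes the auxiliary branches are registered under the names aux1 and aux2, which is how torchvision's GoogLeNet exposes them):

import torchvision.models as models

model = models.googlenet(pretrained=False)  # aux_logits=True by default

total = sum(p.numel() for p in model.parameters() if p.requires_grad)
aux = sum(p.numel() for name, p in model.named_parameters()
          if p.requires_grad and name.startswith(("aux1", "aux2")))

print("with aux classifiers:    {}".format(total))        # 13004888
print("without aux classifiers: {}".format(total - aux))  # ~6.6 million

Alternatively, you can construct the model without the aux branches via models.googlenet(pretrained=False, aux_logits=False) and count its parameters directly.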


Thanks for the explanation, ptrblck! :blush: