Torch Jit Modules without parameters

Hi. I was trying to plot the weight distributions of a full-precision model and its quantized counterpart and ran into the following issue: although .named_modules() lists all the modules that exist in the full-precision model, .named_parameters() does not return parameters for all of them; namely, only the nn.BatchNorm3d layers have parameters.
An extract of the output of q_model.named_parameters() (q_model is a MobileNet3D, post-training quantized and saved with torch.jit.save(torch.jit.script(model), …)):

features.0.1.weight
features.0.1.bias
features.1.conv.1.weight
features.1.conv.1.bias
features.1.conv.4.weight
features.1.conv.4.bias
features.2.conv.1.weight
features.2.conv.1.bias
features.2.conv.4.weight
features.2.conv.4.bias
features.2.conv.7.weight
features.2.conv.7.bias
Notice that layers 0 (Convolution), 2 (ReLU), 3 (Convolution), 5 (ReLU) and 6 (Convolution) are not present (do they have no weights or biases?).
The modules were not fused for this model, which otherwise could explain the behaviour, unless fusion happens under the hood. Does anyone have any insight on this?
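For reference, a rough way to check for hidden fusion (a sketch, assuming q_model is the TorchScript model loaded with torch.jit.load; original_name is how a RecursiveScriptModule exposes the original class name):

# Print the class of every submodule of the loaded TorchScript model, to see
# whether any conv/ReLU ended up replaced by a fused or quantized class.
for name, module in q_model.named_modules():
    cls = getattr(module, 'original_name', type(module).__name__)
    print(name, cls)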

Can you post the code you used to confirm that only nn.BatchNorm3d has parameters?
Also, what error do you get when running the compressed model, and does the baseline model give the same error?

Hi. Thank you for the reply. Neither model gives errors during saving, loading, or inference. The code I used to check the behaviour is the following:

# Build the model and load the full-precision checkpoint
model, _ = generate_model(args)
fp_model = resume_model(
    (args.model_path / 'full_precision' / 'mobilenet3d_ft_50ep.pth'), model)

# Load the quantized, TorchScript-serialized model
quant_model = torch.jit.load(
    (args.model_path / 'quantized' / 'quant_mobilenet3d_ft_50ep.pth').as_posix())

# Print every parameter name of each model
for fp_name, _ in fp_model.named_parameters():
    print(fp_name)

for q_name, _ in quant_model.named_parameters():
    print(q_name)

Just loading both models and checking their parameters. This example is from a MobileNet3D, and even though the layers are not named, I can easily infer each layer's class from its index.
Extract from the full-precision model (missing indices 2 and 5, the ReLUs, as expected):

features.17.conv.0.weight
features.17.conv.1.weight
features.17.conv.1.bias
features.17.conv.3.weight
features.17.conv.4.weight
features.17.conv.4.bias
features.17.conv.6.weight
features.17.conv.7.weight
features.17.conv.7.bias

Extract from the quantized version (missing ReLUs and convolutions, which is not expected):

features.17.conv.1.weight
features.17.conv.1.bias
features.17.conv.4.weight
features.17.conv.4.bias
features.17.conv.7.weight
features.17.conv.7.bias
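
To make the missing names explicit, a quick diff of the two parameter lists works too (a small sketch, reusing fp_model and quant_model from the snippet above):

fp_names = {name for name, _ in fp_model.named_parameters()}
q_names = {name for name, _ in quant_model.named_parameters()}
# Parameter names present in the full-precision model but missing after quantization
print(sorted(fp_names - q_names))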

Interestingly, I also checked the quantized version of a SqueezeNet3D, which was giving abysmal (basically random) inference results, and its list of parameters is completely empty. That architecture has some 3D pooling modules which are not supported, so I had to compute them in floating point. Still, I cannot understand how I can get no errors in loading, saving, or inference.

@AfonsoSalgadoSousa do you have a smaller repro of the issue that we can take a look at?

Hello. Thanks for the reply. I think the easiest way is to follow the PyTorch post-training static quantization tutorial ((beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.7.1 documentation), where I can reproduce the most extreme case of this behaviour (list(myModel.named_parameters()) == []), 'myModel' being the quantized version of the full-precision model. I list the parameters right after torch.quantization.convert(myModel, inplace=True).
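A self-contained repro along those lines (a sketch, assuming the 1.7-era eager-mode quantization API; TinyModel is just an illustrative stand-in, not the tutorial's model):

import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(1, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

myModel = TinyModel().eval()
myModel.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(myModel, inplace=True)
myModel(torch.randn(1, 1, 32, 32))        # calibration pass
torch.quantization.convert(myModel, inplace=True)

print(list(myModel.named_parameters()))   # prints [] after convert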

model.named_parameters() only returns parameters, i.e. instances of torch.nn.Parameter. Quantized convs pack their weight and bias into a special packed-params object which is not an nn.Parameter, which is why they do not show up via named_parameters(). You can inspect the weight and bias of a quantized conv by calling qconv._weight_bias(). e2e example: gist:2456e16d0f40366d830bcfe176fafe5c · GitHub
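For instance, to collect dequantized weights for a distribution plot, something along these lines should work (a sketch, assuming an eager-mode quantized model such as myModel above; _weight_bias() is an internal helper and may change between releases):

quantized_weights = {}
for name, module in myModel.named_modules():
    # Quantized conv/linear modules expose their packed weight and bias this way
    if hasattr(module, '_weight_bias'):
        w, b = module._weight_bias()
        # w is a quantized tensor; dequantize it to compare with the FP32 weights
        quantized_weights[name] = w.dequantize().flatten()

for name, w in quantized_weights.items():
    print(name, w.mean().item(), w.std().item())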

Perfect. Thank you so much for the answer. Been struggling with this for a couple of weeks.