Printing Quantized Model Weights

How do we print quantized model weights in PyTorch?

To print using normal PyTorch representation, I understand we use the following approach…

def print_parameters(model):
    for name, param in model.named_parameters():
        if param.requires_grad:
            print(name, param.data)

Similarly, if I define a model as follows…

import torch

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = torch.nn.Conv2d(2, 3, 1, bias=False)
        self.conv2 = torch.nn.Conv2d(3, 1, 2, bias=False)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        return x

model = Model()

Then we can print the weights with straightforward syntax…

print(model.conv1.weight)
print(model.conv2.weight)

However, both of these approaches fail when the model is converted to a quantized form. Specifically, after the following procedures…

model.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)

Printing model.conv1.weight now returns a bound method rather than a tensor (quantized modules expose the weight through a weight() accessor instead of a Parameter), and the loop provided at the beginning prints nothing because a quantized model has no parameters with requires_grad.
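One way to inspect the weights anyway is to call the weight() accessor on the quantized module, which returns a quantized tensor you can view either as raw integers or as dequantized floats. A minimal sketch (calling convert without a calibration pass, as in the snippet above, will emit observer warnings but still converts the modules):

```python
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(2, 3, 1, bias=False)
        self.conv2 = torch.nn.Conv2d(3, 1, 2, bias=False)

    def forward(self, x):
        return self.conv2(self.conv1(x))

model = Model()
model.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)

# weight is now a method on the quantized module; call it to get the
# quantized tensor, then inspect it either way:
w = model.conv1.weight()
print(torch.int_repr(w))   # the raw int8 values
print(w.dequantize())      # a float approximation of the original weights
```

torch.int_repr() shows the stored integer values, while dequantize() applies the scale and zero point to recover approximate floats, so the two views together tell you how the quantizer encoded each weight.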

We have recently prototyped the PyTorch Numeric Suite, which can compare the numerics between a float model and its quantized counterpart. It’s in the nightly build and will be in 1.6.

You can take a look at the following example to see how to compare the weights using it:
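The example referenced above isn’t reproduced here, but a minimal sketch of the weight-comparison flow looks like the following. It assumes the eager-mode quantization steps from earlier in the post, keeps a float copy of the model, and uses compare_weights from the Numeric Suite module to match float and quantized weights by name:

```python
import copy
import torch
from torch.quantization import _numeric_suite as ns

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(2, 3, 1, bias=False)
        self.conv2 = torch.nn.Conv2d(3, 1, 2, bias=False)

    def forward(self, x):
        return self.conv2(self.conv1(x))

# keep the float model around, quantize a deep copy
float_model = Model()
qmodel = copy.deepcopy(float_model)
qmodel.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(qmodel, inplace=True)
torch.quantization.convert(qmodel, inplace=True)

# compare_weights returns a dict mapping each weight name to its
# float and quantized versions, which you can then print side by side
wt_compare_dict = ns.compare_weights(float_model.state_dict(),
                                     qmodel.state_dict())
for key in wt_compare_dict:
    print(key,
          wt_compare_dict[key]['float'].shape,
          wt_compare_dict[key]['quantized'].shape)
```

From there you can print the tensors themselves, or compute an error metric (e.g. SQNR) between each float/quantized pair to see which layers lose the most precision.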