Would the weights be float or int?

I am trying to implement post-training quantization for, say, 8 bits. I saw some custom code using PyTorch, but I am really confused. I trained the model and tested it first. Then I quantized the model to 8 bits. But the weights are still floating point numbers. Shouldn't they be in the range 0 to 255? Please clarify.

Hello @Kai123

Are you sure it's the weights that are floats, and not the outputs of the model?

Could you please provide a code snippet where you see this?

If I try this simple example from the docs:

import torch

# define a floating point model
class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.fc = torch.nn.Linear(4, 4)

    def forward(self, x):
        x = self.fc(x)
        return x

# create a model instance
model_fp32 = M()
# create a quantized model instance
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32,  # the original model
    {torch.nn.Linear},  # a set of layers to dynamically quantize
    dtype=torch.qint8)  # the target dtype for quantized weights

# run the model
input_fp32 = torch.randn(4, 4, 4, 4)
res = model_int8(input_fp32)

I can check the weights of the fc layer with
print(model_int8.fc.weight())

And the output will be:

tensor([[-0.0307,  0.1806,  0.3189,  0.0692],
        [ 0.1306, -0.1114,  0.1306, -0.4227],
        [-0.1575, -0.1729, -0.4841, -0.2997],
        [ 0.2036,  0.0307,  0.4880,  0.3804]], size=(4, 4), dtype=torch.qint8,
       quantization_scheme=torch.per_tensor_affine, scale=0.0038423435762524605,
       zero_point=0)

It says that the weights are of qint8 type.
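
As a quick sanity check (a minimal sketch, assuming the model_int8 instance from the example above), you can also inspect the stored dtype and the quantization parameters directly:

print(model_int8.fc.weight().dtype)          # torch.qint8
print(model_int8.fc.weight().q_scale())      # per-tensor scale used for (de)quantization
print(model_int8.fc.weight().q_zero_point()) # per-tensor zero point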

Thank you for your time and response. You have shown the weights of the fc layer of the quantized model. And here is my question: the weights are float values. Why is that? Shouldn't they be in the range 0 to 255? I think I misunderstood something. Please clarify.
If I run this: print(model_fp32.fc.weight)

Parameter containing:
tensor([[ 0.1790, -0.3491, -0.1060,  0.0618],
        [ 0.4251, -0.0054, -0.4578,  0.0406],
        [-0.2153, -0.1724, -0.1873, -0.2386],
        [-0.3372, -0.2659, -0.4900, -0.2144]], requires_grad=True)

If I run this: print(model_int8.fc.weight())

tensor([[ 0.1806, -0.3497, -0.1076,  0.0615],
        [ 0.4266, -0.0038, -0.4573,  0.0423],
        [-0.2152, -0.1729, -0.1883, -0.2383],
        [-0.3382, -0.2652, -0.4919, -0.2152]], size=(4, 4), dtype=torch.qint8,
       quantization_scheme=torch.per_tensor_affine, scale=0.0038432059809565544,
       zero_point=0)
For both the fp32 and int8 models, the weights are almost identical, and both print as floating point numbers.

I see, the problem is understood.
I suspect, but can't claim for certain, that the numbers you see in a torch.qint8 tensor are just a floating point representation of the integers actually stored there. I mean, printing implicitly applies dequantization with zero_point and scale before displaying the results. Also note that torch.qint8 is a signed type, so the stored integers lie in the range -128 to 127; the 0 to 255 range corresponds to torch.quint8.
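
If that suspicion is right, you should be able to recover the raw integers with int_repr() and reproduce the printed floats by applying the affine dequantization formula (q - zero_point) * scale yourself. A minimal sketch, continuing from the model_int8 example above:

w = model_int8.fc.weight()

# raw int8 values actually stored in the quantized tensor
print(w.int_repr())  # tensor of dtype=torch.int8; exact values will vary per run

# apply the affine dequantization by hand: x = (q - zero_point) * scale
manual = (w.int_repr().float() - w.q_zero_point()) * w.q_scale()
print(torch.allclose(manual, w.dequantize()))  # True: printing shows these dequantized floats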
