nn.Linear does not have `weight` anymore after quantization

Hi,

I tried this code:

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(32, 32))
net.qconfig = torch.quantization.get_default_qconfig('fbgemm')
net = torch.quantization.prepare(net)
inten = torch.randn(224, 32)
net(inten)  # calibration pass so the observers record activation ranges

qnet = torch.quantization.convert(net)
print(qnet)
print(qnet.state_dict().keys())
print(qnet.state_dict()['weight'])  # raises KeyError: there is no 'weight' key

Only to find that there is no `weight` attribute in qnet. How can I make the attribute the same as in the original nn.Linear?

The weight for a quantized Linear is packed together with the bias. You can access both by calling linear_layer_instance._weight_bias() (see torch/nn/quantized/modules/linear.py in the pytorch/pytorch repository on GitHub).
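
A minimal sketch of how that looks with the Sequential model from the question, assuming the converted qnet from above where the quantized Linear sits at index 0. Note that _weight_bias() is an internal helper and may change between releases:

# Continuing from the snippet above; qnet[0] is the quantized Linear.
w, b = qnet[0]._weight_bias()   # internal helper: returns (weight, bias)
print(w)                        # a quantized (qint8) tensor
print(w.dequantize())           # float tensor, comparable to the fp32 weight

# Quantized Linear also exposes weight() and bias() as methods,
# not attributes, so these are called with parentheses:
print(qnet[0].weight())
print(qnet[0].bias())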
