Hi,
I tried this code:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(32, 32))
net.qconfig = torch.quantization.get_default_qconfig('fbgemm')
net = torch.quantization.prepare(net)
inten = torch.randn(224, 32)
net(inten)  # calibration pass
qnet = torch.quantization.convert(net)
print(qnet)
print(qnet.state_dict().keys())
print(qnet.state_dict()['weight'])  # KeyError: 'weight'
```
Only to find that there is no `weight` entry in `qnet`'s state_dict. How can I access the weight the same way as in the original `nn.Linear`?
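For reference, here is what I can get at so far. It looks like the converted module stores a packed weight rather than a plain `weight` attribute, but exposes it through the `weight()` method of `torch.nn.quantized.Linear` (a minimal sketch, assuming eager-mode quantization with the `fbgemm` backend):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(32, 32))
net.qconfig = torch.quantization.get_default_qconfig('fbgemm')
net = torch.quantization.prepare(net)
net(torch.randn(224, 32))          # calibration pass
qnet = torch.quantization.convert(net)

# state_dict holds packed params (e.g. '0._packed_params._packed_params'),
# not a 'weight' key
print(list(qnet.state_dict().keys()))

w = qnet[0].weight()               # quantized weight tensor
print(w.dequantize().shape)        # float copy with the original shape
```

Is this the intended way to read the weight back, or is there a way to keep a `weight` attribute like the float module has?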