When I run QAT, training works fine, but when I try to evaluate the QAT model, an error, `length of scales must equal to channel`, confuses me.
I am using PyTorch 1.4.0, and my code is:
# Training is normal
net = MyQATNet()
net.fuse_model()
net.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(net, inplace=True)
net_eval = net
net.apply(torch.quantization.enable_observer)
net.apply(torch.quantization.enable_fake_quant)
# Evaluate
qat_model = copy.deepcopy(net_eval)
qat_model.eval()
qat_model.to(torch.device('cpu'))
torch.quantization.convert(qat_model, inplace=True)  # the error is raised here
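For reference, here is a minimal sketch of the full QAT flow (fuse, prepare, train, eval, convert) that I believe the tutorials recommend, on a toy model; `TinyNet` is just a hypothetical stand-in for `MyQATNet`, and the layer shapes are made up:

```python
import copy
import torch
import torch.nn as nn

# Hypothetical stand-in for MyQATNet
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.bn(self.conv(x)))
        return self.dequant(x)

net = TinyNet()
net.eval()  # fuse_modules expects eval mode in recent PyTorch versions
torch.quantization.fuse_modules(net, [['conv', 'bn', 'relu']], inplace=True)
net.train()  # prepare_qat expects training mode
net.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(net, inplace=True)

# ... the QAT training loop goes here; one forward pass is enough for the
# observers to record activation ranges in this sketch
net(torch.randn(2, 3, 8, 8))

# Evaluate: switch to eval() *before* convert, on CPU
net.eval()
qat_model = copy.deepcopy(net).cpu()
torch.quantization.convert(qat_model, inplace=True)
out = qat_model(torch.randn(1, 3, 8, 8))
```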
Also, how should I save the QAT-trained model? If I save torch.save(qat_model.state_dict(), 'qat_model.pth'), the keys look like conv1.0.activation_post_process.scale; if I instead save the training model directly with torch.save(net, 'net.pth'), the saved keys contain no conv1.0.activation_post_process.scale at all. Either way, when I try to load the pretrained QAT model, the expected key is conv1.0.0.activation_post_process.scale, so a KeyError happens. When I inspect the model definition, the expected key looks right.
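In case it clarifies what I am trying to do, here is a sketch of the save/load round trip I would expect to work: save the state_dict of the *converted* model, then rebuild a model that has gone through the exact same fuse/prepare/convert steps before calling load_state_dict, so the keys line up. `TinyNet` and `build_quantized` are hypothetical stand-ins, not my real code:

```python
import os
import tempfile
import torch
import torch.nn as nn

# Hypothetical stand-in for MyQATNet
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

def build_quantized(load_weights=None):
    """Rebuild the exact fuse/prepare/convert pipeline so state_dict keys match."""
    net = TinyNet()
    net.eval()
    torch.quantization.fuse_modules(net, [['conv', 'relu']], inplace=True)
    net.train()
    net.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
    torch.quantization.prepare_qat(net, inplace=True)
    net(torch.randn(2, 3, 8, 8))  # stand-in for the QAT training loop
    net.eval()
    torch.quantization.convert(net, inplace=True)
    if load_weights is not None:
        # Keys match only because this model was fused/prepared/converted
        # the same way as the one that produced the checkpoint
        net.load_state_dict(torch.load(load_weights))
    return net

qat_model = build_quantized()
path = os.path.join(tempfile.mkdtemp(), 'qat_model.pth')
torch.save(qat_model.state_dict(), path)

restored = build_quantized(load_weights=path)
```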