I seem to have completed QAT on my model successfully, but when I run
_ = model(x)
I get the following error:
_ = model(x)
  File "/home/paul/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/paul/rknn2/PytorchProject/ssd/r-ssd8.py", line 255, in forward
    x = self.dequant(x)
  File "/home/paul/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/paul/pytorch/lib/python3.6/site-packages/torch/nn/quantized/modules/__init__.py", line 74, in forward
    return Xq.dequantize()
AttributeError: 'tuple' object has no attribute 'dequantize'
The model class is defined as below:
class QuantizedRSSD(nn.Module):
    def __init__(self, model_fp32):
        super(QuantizedRSSD, self).__init__()
        # QuantStub converts tensors from floating point to quantized.
        # This will only be used for inputs.
        self.quant = torch.quantization.QuantStub()
        # DeQuantStub converts tensors from quantized to floating point.
        # This will only be used for outputs.
        self.dequant = torch.quantization.DeQuantStub()
        # FP32 model
        self.model_fp32 = model_fp32

    def forward(self, x):
        x = self.quant(x)
        x = self.model_fp32(x)
        x = self.dequant(x)
        return x
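My suspicion is that self.model_fp32(x) returns a tuple (an SSD head typically returns something like locations and confidences), while DeQuantStub.forward simply calls .dequantize() on whatever it receives. Here is a minimal plain-Python sketch of that failure mode, with no torch dependency and purely illustrative class names, plus the per-element workaround I am considering:

```python
class FakeQuantTensor:
    """Illustrative stand-in for a quantized tensor exposing .dequantize()."""
    def __init__(self, value):
        self.value = value

    def dequantize(self):
        return float(self.value)


class DeQuantStubSketch:
    """Mimics the relevant behavior of torch.quantization.DeQuantStub:
    it just calls .dequantize() on its input."""
    def __call__(self, x):
        return x.dequantize()  # raises AttributeError if x is a tuple


dequant = DeQuantStubSketch()

# A model returning a tuple instead of a single tensor:
outputs = (FakeQuantTensor(1), FakeQuantTensor(2))

try:
    dequant(outputs)
except AttributeError as e:
    error_message = str(e)  # "'tuple' object has no attribute 'dequantize'"

# Dequantizing each element of the tuple instead works:
dequantized = tuple(dequant(o) for o in outputs)
print(dequantized)  # (1.0, 2.0)
```

If this is the right diagnosis, the fix would presumably be to unpack the tuple in QuantizedRSSD.forward and call self.dequant on each tensor separately, but I would like confirmation that this is the intended usage.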
Can anyone help me identify why QuantizedRSSD appears to work fine during QAT, but then complains at inference time that a tuple has no dequantize attribute? Does torch.quantization.DeQuantStub() handle this case, or is something missing?
Thanks a lot for your help.