QAT: AttributeError: 'tuple' object has no attribute 'dequantize'

Hi,

I seem to have finished QAT on my model successfully, but when I run

_ = model(x)

I got the following error:

    _ = model(x)
  File "/home/paul/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/paul/rknn2/PytorchProject/ssd/r-ssd8.py", line 255, in forward
    x = self.dequant(x)
  File "/home/paul/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/paul/pytorch/lib/python3.6/site-packages/torch/nn/quantized/modules/__init__.py", line 74, in forward
    return Xq.dequantize()
AttributeError: 'tuple' object has no attribute 'dequantize'

The model class is defined as below:

import torch
import torch.nn as nn

class QuantizedRSSD(nn.Module):
    def __init__(self, model_fp32):
        super(QuantizedRSSD, self).__init__()
        # QuantStub converts tensors from floating point to quantized.
        # This will only be used for inputs.
        self.quant = torch.quantization.QuantStub()
        # DeQuantStub converts tensors from quantized to floating point.
        # This will only be used for outputs.
        self.dequant = torch.quantization.DeQuantStub()
        # FP32 model
        self.model_fp32 = model_fp32

    def forward(self, x):
        x = self.quant(x)
        x = self.model_fp32(x)
        x = self.dequant(x)
        return x
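
For reference, the eager-mode QAT flow I am using is roughly the sketch below (the two-layer stand-in backbone is only there to keep the sketch self-contained; it is not my real SSD model):

# Stand-in backbone so the sketch runs on its own; the real model is an SSD variant.
fp32_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())

model = QuantizedRSSD(fp32_model)
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(model, inplace=True)
# ... fine-tune here so the fake-quantization observers calibrate ...
model.eval()
quantized_model = torch.quantization.convert(model, inplace=False)

x = torch.randn(1, 3, 300, 300)
_ = quantized_model(x)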

Can anyone help me identify why QuantizedRSSD seems to work fine during QAT, but then complains about the missing dequantize attribute when I run it? Does torch.quantization.DeQuantStub() work here, or is something missing?

Thanks a lot for your help.

DeQuantStub only works for Tensors. If you have a tuple output (and both elements need to be dequantized), you can write something like: x = map(self.dequant, x)
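
In context, that would slot into the forward above like this (a sketch, assuming model_fp32 returns a tuple of two tensors):

    def forward(self, x):
        x = self.quant(x)
        x = self.model_fp32(x)    # now a tuple of tensors, e.g. (locations, confidences)
        x = map(self.dequant, x)  # dequantize each element; note map() is lazy in Python 3
        return x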

@jerryzh168

Thank you so much for your help. Your solution works!

Then I tried to port the resulting quantized_model to RKNN hardware, which requires

trace_model = torch.jit.trace(quantized_model, torch.Tensor(1, 3, 300, 300))

before it can convert the quantized_model into an RKNN model. But when I ran torch.jit.trace, I encountered the following error:

RuntimeError: Tracer cannot infer type of <map object at 0x7f308b270cc0>
:Only tensors and (possibly nested) tuples of tensors, lists, or dicts are supported as inputs or outputs of traced functions, but instead got value of type map.

I believe it is caused by the x = map(self.dequant, x) we just introduced to solve the previous problem: in Python 3, map() returns a lazy map object rather than a tuple of tensors, and the tracer cannot handle that.

Is there a way to get torch.jit.trace to work in this case? Thanks a lot for your help again.

torch.jit.trace works when I manually split the map() into

x1, x2 = self.dequant(x1), self.dequant(x2)
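
For completeness, here is roughly how that looks in the wrapper's forward (a sketch, assuming model_fp32 returns exactly two tensors):

    def forward(self, x):
        x = self.quant(x)
        x1, x2 = self.model_fp32(x)                  # unpack the tuple output
        x1, x2 = self.dequant(x1), self.dequant(x2)  # dequantize each tensor explicitly
        return x1, x2                                # a plain tuple of tensors, which the tracer accepts

Anything that materializes the map into a concrete tuple of tensors (for example x = tuple(self.dequant(t) for t in x)) should work for the same reason: the tracer accepts tuples of tensors but not opaque iterators.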
