Hi,
I'm trying to quantize a DETR model, but I ran into this error:
/usr/local/lib/python3.7/dist-packages/torch/nn/quantized/modules/embedding_ops.py in from_float(cls, mod)
150 dtype = weight_observer.dtype
151
--> 152 assert dtype == torch.quint8, 'The only supported dtype for nnq.Embedding is torch.quint8'
153
154 # Run the observer to calculate qparams.
AssertionError: The only supported dtype for nnq.Embedding is torch.quint8
Can you set the dtype for the weight observer to torch.quint8? I think the default is torch.qint8. Could you paste the code you're using to quantize your model?
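As a sketch of what that change could look like (not code from this thread; `TinyModel` is a hypothetical stand-in for DETR's embedding layers): PyTorch ships `float_qparams_weight_only_qconfig`, whose weight observer uses `torch.quint8`, which is what `nnq.Embedding` expects. Assigning it to the embedding module before `prepare`/`convert` avoids the assertion:

```python
import torch
import torch.nn as nn
from torch.quantization import (
    float_qparams_weight_only_qconfig,
    prepare,
    convert,
)

# Hypothetical toy model standing in for DETR's embedding layers.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(100, 16)

    def forward(self, idx):
        return self.embed(idx)

model = TinyModel()
model.eval()

# nnq.Embedding only supports torch.quint8 weights, so give the
# embedding module the float_qparams qconfig (quint8 weight observer)
# instead of a default qconfig whose weight observer is qint8.
model.embed.qconfig = float_qparams_weight_only_qconfig

prepared = prepare(model, inplace=False)
# Calibration pass with dummy index data.
prepared(torch.randint(0, 100, (4, 8)))
quantized = convert(prepared, inplace=False)

print(type(quantized.embed))
```

After `convert`, `quantized.embed` should be a quantized `Embedding` module rather than the float `nn.Embedding`.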