AssertionError: The only supported dtype for nnq.Embedding is torch.quint8

Hi,
I'm trying to quantize a DETR model but I ran into this error:

/usr/local/lib/python3.7/dist-packages/torch/nn/quantized/modules/embedding_ops.py in from_float(cls, mod)
150 dtype = weight_observer.dtype
151
→ 152 assert dtype == torch.quint8, 'The only supported dtype for nnq.Embedding is torch.quint8'
153
154 # Run the observer to calculate qparams.

AssertionError: The only supported dtype for nnq.Embedding is torch.quint8

Can you set the dtype for the weight observer to torch.quint8? I think the default is torch.qint8. Can you paste the code you use to quantize your model?
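For reference, you can confirm which dtype the default qconfig's weight observer uses, which is why the assertion fires for embedding modules:

```python
import torch

# Instantiate the weight observer from the default fbgemm qconfig
# and inspect its dtype: it is torch.qint8, not the torch.quint8
# that nnq.Embedding requires.
qconfig = torch.quantization.get_default_qconfig('fbgemm')
print(qconfig.weight().dtype)  # torch.qint8
```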

Thank you for your reply.

import torch

weights = './model.pth'
device = torch.device('cpu')  # assuming CPU here; was not defined in the original snippet

# load model
model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=False, num_classes=7)
checkpoint = torch.load(weights, map_location=device)
model.load_state_dict(checkpoint, strict=False)
model.eval()
#model.fuse()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
qmodel = torch.quantization.prepare(model)
qmodel_comp = torch.quantization.convert(qmodel)

I think you need to set the qconfig for the embedding module to float_qparams_weight_only_qconfig, whose weight observer uses torch.quint8: https://github.com/pytorch/pytorch/blob/1edf6f56477ea317af845a4cd65eb311737961f0/torch/ao/quantization/qconfig.py#L141
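A minimal sketch of that override, using a toy module with an nn.Embedding rather than the full DETR model (the mechanics are the same): set float_qparams_weight_only_qconfig on the embedding submodule before prepare/convert so its weight observer uses torch.quint8, while the rest of the model keeps the default fbgemm qconfig.

```python
import torch
import torch.nn as nn

# Toy stand-in for a model containing an embedding (not DETR itself).
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(10, 4)
        self.fc = nn.Linear(4, 2)

    def forward(self, idx):
        return self.fc(self.emb(idx))

model = Toy().eval()

# Default qconfig for everything else...
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
# ...but override the embedding with the float-qparams weight-only
# qconfig, whose weight observer uses torch.quint8 as nnq.Embedding expects.
model.emb.qconfig = torch.quantization.float_qparams_weight_only_qconfig

prepared = torch.quantization.prepare(model)
prepared(torch.tensor([0, 1, 2]))  # calibration pass for the activation observers
converted = torch.quantization.convert(prepared)

print(type(converted.emb))  # now a quantized Embedding, no AssertionError
```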