For this code:
model.qconfig = torch.quantization.float_qparams_weight_only_qconfig
print(model.qconfig)
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)
I've got this error:
NotImplementedError: Could not run 'quantized::embedding_bag_prepack' with arguments from the 'QuantizedCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'quantized::embedding_bag_prepack' is only available for these backends: [QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, Tracer, AutocastCPU, Autocast, Batched, VmapMode, Functionalize].
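For context, the backend list in the error shows that quantized::embedding_bag_prepack is implemented only for QuantizedCPU, so the model and its weights need to be on CPU when convert runs. Below is a minimal, hedged sketch of the same eager-mode flow that avoids the error by keeping the model on CPU; the nn.EmbeddingBag wrapper is a hypothetical stand-in for the original model, which isn't shown in the post:

```python
import torch
import torch.nn as nn


class Model(nn.Module):
    # Hypothetical minimal model: float_qparams_weight_only_qconfig targets
    # embedding-style layers, which is what triggers embedding_bag_prepack.
    def __init__(self):
        super().__init__()
        self.emb = nn.EmbeddingBag(num_embeddings=10, embedding_dim=4)

    def forward(self, indices, offsets):
        return self.emb(indices, offsets)


model = Model()
# Keep (or move) the model on CPU: the prepack op has no QuantizedCUDA kernel.
model = model.cpu()

model.qconfig = torch.quantization.float_qparams_weight_only_qconfig
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)

# After convert, the float EmbeddingBag is swapped for its quantized counterpart.
print(type(model.emb).__module__)
```

If inference on GPU is required afterwards, note that the quantized module itself still runs on CPU; quantization on CUDA is not supported by this eager-mode path.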