Quantization error: embedding_bag_prepack

For this code:

model.qconfig = torch.quantization.float_qparams_weight_only_qconfig

torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)

I've got this error:

NotImplementedError: Could not run 'quantized::embedding_bag_prepack' with arguments from the 'QuantizedCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'quantized::embedding_bag_prepack' is only available for these backends: [QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, Tracer, AutocastCPU, Autocast, Batched, VmapMode, Functionalize].

Colab to reproduce: Google Colab

Hi @AlexWortega, this error message means that the kernel you are trying to run does not have a CUDA implementation: `quantized::embedding_bag_prepack` is only registered for the CPU backend (note `QuantizedCPU` in the list of available backends). Can you try moving your model to CPU and running the quantization there?
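To illustrate, here is a minimal sketch of the suggested fix: a toy model with an `nn.EmbeddingBag` (a stand-in for your actual model, which I don't have) is moved to CPU before `prepare`/`convert`, so the prepack op dispatches to its CPU kernel.

```python
import torch
import torch.nn as nn


class ToyModel(nn.Module):
    """Hypothetical stand-in for the model in the question."""

    def __init__(self):
        super().__init__()
        self.emb = nn.EmbeddingBag(num_embeddings=10, embedding_dim=4)

    def forward(self, indices, offsets):
        return self.emb(indices, offsets)


model = ToyModel()
model.eval()

# Key step: quantized::embedding_bag_prepack only has a CPU kernel,
# so the model must be on CPU before convert() is called.
model = model.cpu()

model.qconfig = torch.quantization.float_qparams_weight_only_qconfig
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)

# Run inference with CPU tensors: two bags of four indices each.
indices = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
offsets = torch.tensor([0, 4])
out = model(indices, offsets)
print(out.shape)  # torch.Size([2, 4])
```

After `convert`, `model.emb` is replaced by a quantized `EmbeddingBag` whose weights are stored in 8-bit with per-row float scale/zero-point, which is what `float_qparams_weight_only_qconfig` selects. If you need the rest of the model on GPU, you can quantize the embedding part on CPU and keep it there while moving other submodules to CUDA.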