Quantized weights of transformer

I am trying to print the quantized weights of a BERT transformer:

> import torch
> from transformers import AutoModelForSequenceClassification
>
> model_name = 'yoshitomo-matsubara/bert-base-uncased-sst2'
> model = AutoModelForSequenceClassification.from_pretrained(model_name)
> # Quantize the model to INT8 (fixed point) with PyTorch dynamic quantization
> model_dynamic_quantized = torch.quantization.quantize_dynamic(
>     model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
> for v in model_dynamic_quantized.state_dict().values():
>     print(v.int_repr())

No matter what I try, I keep getting this error:

NotImplementedError: Could not run 'aten::int_repr' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'aten::int_repr' is only available for these backends: [QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
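For context on what triggers the error: `quantize_dynamic` only converts the `nn.Linear` layers to qint8, so the model's `state_dict()` still contains plain fp32 tensors (embeddings, LayerNorm parameters, biases), and calling `int_repr()` on an fp32 tensor raises exactly this `NotImplementedError`. A minimal sketch of the failure and a workaround that iterates over modules instead of `state_dict()` values, using a small toy model in place of BERT (the structure of the quantized layers is the same):

```python
import torch

# Toy stand-in for BERT: any model containing nn.Linear layers behaves
# the same way under dynamic quantization.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 4),
    torch.nn.ReLU(),
    torch.nn.Linear(4, 2),
)

quantized = torch.quantization.quantize_dynamic(
    model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8
)

# Iterate modules rather than state_dict(): only the dynamically
# quantized Linear layers hold qint8 weights; everything else is still
# fp32, and int_repr() on an fp32 tensor raises NotImplementedError.
for name, module in quantized.named_modules():
    if isinstance(module, torch.nn.quantized.dynamic.Linear):
        # weight() returns the quantized tensor; int_repr() then gives
        # the raw int8 values.
        print(name, module.weight().int_repr())
```

This prints int8 weight matrices only for the converted `Linear` layers, which is where the quantized values actually live after dynamic quantization.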