Question about how to load quantized model

Hi, I quantized my model and can save it successfully, but when I evaluate the model with torch.jit I get the following error:

        feature2 = self.L2normalize(feature2)
  File "/media/zst/8ec88aab-d885-4801-98ab-e3181c65261b/A/pro/", line 20, in L2normalize
    def L2normalize(self, x):
        eps = 1e-6
        norm = x ** 2
               ~~~~~~ <--- HERE
        norm = norm.sum(dim=1, keepdim=True) + eps
        #norm = torch.sum(norm,1) + eps
RuntimeError: Could not run 'aten::pow.Tensor_Scalar' with arguments from the 'QuantizedCPU' backend. 'aten::pow.Tensor_Scalar' is only available for these backends: [CPU, CUDA, SparseCPU, SparseCUDA, Named, Autograd, Profiler, Tracer, Autocast].

But when I test the model with torch.load instead, I get:

	Unexpected key(s) in state_dict: "conv1a.0.scale", "conv1a.0.zero_point", "conv1aa.0.scale", "conv1aa.0.zero_point", "conv1b.0.scale", "conv1b.0.zero_point", "conv2a.0.scale", "conv2a.0.zero_point", "conv2aa.0.scale", "conv2aa.0.zero_point", "conv2b.0.scale", "conv2b.0.zero_point", "conv3a.0.scale", "conv3a...
While copying the parameter named "conv1a.0.weight", whose dimensions in the model are torch.Size([16, 3, 3, 3]) and whose dimensions in the checkpoint are torch.Size([16, 3, 3, 3]), an exception occured : ('Copying from quantized Tensor to non-quantized Tensor is not allowed, please use dequantize to get a float Tensor from a quantized Tensor',)...

We don't currently support a quantized implementation of aten::pow. To get around this, you can add a quant/dequant stub around your operator so that it runs on float tensors.

Please refer to the tutorial for how to do this -
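A minimal sketch of the workaround, assuming eager-mode static quantization: dequantize before the float-only `L2normalize` (so `aten::pow` dispatches to the regular CPU backend) and re-quantize afterwards. The module and layer names here are illustrative, not from your model; on older PyTorch versions the stubs live under `torch.quantization` rather than `torch.ao.quantization`.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import QuantStub, DeQuantStub

class L2NormModel(nn.Module):
    """Illustrative model: a quantized conv followed by a float L2 normalization."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()        # float -> quantized at the model input
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.dequant = DeQuantStub()    # quantized -> float before the pow
        self.requant = QuantStub()      # float -> quantized after normalizing

    def L2normalize(self, x):
        eps = 1e-6
        norm = x ** 2                   # aten::pow, float-only
        norm = norm.sum(dim=1, keepdim=True) + eps
        return x / norm.sqrt()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.dequant(x)             # drop to float for the unsupported op
        x = self.L2normalize(x)         # pow now runs on the CPU backend
        return self.requant(x)          # back to quantized for later layers

model = L2NormModel().eval()
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
torch.ao.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 8, 8))          # calibration pass to fill observers
torch.ao.quantization.convert(model, inplace=True)
out = model(torch.randn(1, 3, 8, 8))    # runs without the aten::pow error
```

The same pattern applies for any op without a QuantizedCPU kernel: bracket just that op with a DeQuantStub/QuantStub pair, then run prepare/convert as usual.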