How to specify a per-op qconfig in the "prepare_jit" qconfig_dict

I want to use "prepare_jit" and "convert_jit" to quantize ResNet-18, but I can't assign a different qconfig to 'layer1.0.conv1'.
my code:

model = models.__dict__['resnet18']()
model = torch.jit.script(model.eval())
qconfig1 = torch.quantization.QConfig(
    activation=torch.quantization.HistogramObserver.with_args(
        reduce_range=False),
    weight=torch.quantization.default_per_channel_weight_observer)
torch.quantization.prepare_jit(model, {'layer1.0.conv1': qconfig1}, True)
model(torch.randn(1, 3, 224, 224))
torch.quantization.convert_jit(model, True, False)

But it fails with the following message:
File "/home/xxx/python3.7/site-packages/torch/quantization/quantize_jit.py", line 58, in _prepare_jit
    quant_type)
RuntimeError: torch.torch.nn.modules.conv.___torch_mangle_67.Conv2d (of Python compilation unit at: 0x56088f811c00) is not compatible with the type torch.torch.nn.modules.conv.___torch_mangle_66.Conv2d (of Python compilation unit at: 0x56088f811c00) for the field 'conv1'

It seems the key 'layer1.0.conv1' is not correct.
What should I do?

Is the goal here to quantize only one layer in the entire model with qconfig1? Can you try without specifying the inplace option for prepare_jit and convert_jit?
i.e. torch.quantization.prepare_jit(model, {'layer1.0.conv1': qconfig1})

cc @jerryzh168 for additional insight.

prepare_jit/convert_jit are no longer being maintained. For automatic quantization, please try FX graph mode quantization: (prototype) FX Graph Mode Post Training Static Quantization — PyTorch Tutorials 1.8.0 documentation

If you encounter problems with symbolic tracing, you can take a look at: (prototype) FX Graph Mode Quantization User Guide — PyTorch Tutorials 1.8.0 documentation
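For the original goal (a per-module qconfig for one conv layer), here is a rough sketch of how FX graph mode quantization handles it. This assumes a recent PyTorch (>= 1.13, where QConfigMapping and the torch.ao.quantization namespace exist; in 1.8 the equivalent is a plain qconfig_dict with a "module_name" key), and it uses a small stand-in model instead of resnet18 to keep the example self-contained:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QConfig, QConfigMapping, HistogramObserver,
    default_per_channel_weight_observer, get_default_qconfig)
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

class Net(nn.Module):
    """Small stand-in for resnet18; 'conv2' plays the role of 'layer1.0.conv1'."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3)
        self.conv2 = nn.Conv2d(8, 8, 3)

    def forward(self, x):
        return self.conv2(self.conv1(x))

# Same qconfig as in the question.
qconfig1 = QConfig(
    activation=HistogramObserver.with_args(reduce_range=False),
    weight=default_per_channel_weight_observer)

# Global default plus a per-module override, keyed by the module's
# fully qualified name (for resnet18 this would be 'layer1.0.conv1').
qconfig_mapping = (QConfigMapping()
                   .set_global(get_default_qconfig("fbgemm"))
                   .set_module_name("conv2", qconfig1))

model = Net().eval()
example_inputs = (torch.randn(1, 3, 32, 32),)
prepared = prepare_fx(model, qconfig_mapping, example_inputs)
prepared(torch.randn(1, 3, 32, 32))   # calibration pass
quantized = convert_fx(prepared)
out = quantized(torch.randn(1, 3, 32, 32))
print(out.shape)
```

Unlike prepare_jit, the "module_name" override here is matched against the eager-mode module hierarchy, so names like 'layer1.0.conv1' work directly.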