Static Quantization of nn.ConstantPad2d

I have a module like this:

self.conv = nn.Sequential(nn.ConstantPad2d((1, 2, 1, 2), 0), nn.Conv2d(...))

The model converts to its quantized form successfully, but when I try to evaluate it I get this error:

RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. 'aten::empty.memory_format' is only available
for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, BackendSelect, Autograd, Profiler, Tracer].
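
For reference, this is roughly how I hit the error with eager-mode static quantization (the real model is larger; the layer sizes and input shape below are placeholders, not my actual values):

import torch
import torch.nn as nn

# Minimal sketch of the flow that triggers the error: prepare -> calibrate -> convert -> eval.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Sequential(nn.ConstantPad2d((1, 2, 1, 2), 0.0),
                                  nn.Conv2d(3, 16, kernel_size=3))
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.conv(self.quant(x)))

m = M().eval()
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 3, 32, 32))             # calibration pass, runs fine in float
torch.quantization.convert(m, inplace=True)
m(torch.randn(1, 3, 32, 32))             # padding now sees a quantized tensor and raises the RuntimeError above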

I think this is because quantization of nn.ConstantPad2d is not supported. Is there a workaround for it?

I cannot fold the ConstantPad2d into the Conv2d because Conv2d does not support asymmetric padding (the equivalent of nn.ConstantPad2d((1, 2, 1, 2))).
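
The only workaround I can think of so far is to run the padding in float by wrapping it between DeQuantStub and QuantStub, so that only the Conv2d stays quantized. A rough sketch (it assumes the input to this block is already quantized by the surrounding model, and the conv parameters are placeholders):

import torch
import torch.nn as nn

# Sketch of a float-padding workaround: DeQuantStub -> ConstantPad2d -> QuantStub -> Conv2d.
class PaddedConv(nn.Module):
    def __init__(self):
        super().__init__()
        self.dequant = torch.quantization.DeQuantStub()
        self.pad = nn.ConstantPad2d((1, 2, 1, 2), 0.0)
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 16, kernel_size=3)

    def forward(self, x):
        x = self.dequant(x)  # back to float so the padding has a CPU kernel
        x = self.pad(x)
        x = self.quant(x)    # re-quantize before the (quantized) conv
        return self.conv(x)

The extra dequantize/quantize pair costs some speed and possibly accuracy, so I would prefer proper support for quantized ConstantPad2d.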

I think @Zafar is working on supporting constant pad right now: https://github.com/pytorch/pytorch/pull/43304