Quantization of Reflection Padding?

Hi, please enlighten me: how can I write a custom function to work around this problem?
Is it possible to replace the padding in the pre-trained model with a quantization-supported operator?

Quantization support for reflection_pad1d was added in https://github.com/pytorch/pytorch/pull/37452. cc @Zafar
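If your PyTorch build includes that kernel, 1-D reflection padding should work directly on quantized tensors. A minimal sketch, where the shapes and quantization parameters are just for illustration:

```python
import torch
import torch.nn as nn

# Quantize a float tensor and apply 1-D reflection padding directly;
# this relies on the quantized reflection_pad1d kernel added in that PR,
# so it assumes your PyTorch build includes it.
x = torch.randn(1, 2, 8)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

pad = nn.ReflectionPad1d(2)
qy = pad(qx)               # output stays a quantized tensor
print(qy.is_quantized)     # expected: True
print(qy.shape)            # expected: torch.Size([1, 2, 12])
```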

You can follow that PR as a template to add support for additional reflection_pad operators if they aren’t supported yet.
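Until a quantized kernel exists for the variant you need (e.g. ReflectionPad2d), one common eager-mode workaround is to run the padding in float between a DeQuantStub/QuantStub pair. A minimal sketch, with the wrapper name being hypothetical:

```python
import torch
import torch.nn as nn

class ReflectionPadFloatWrapper(nn.Module):
    """Hypothetical wrapper: run ReflectionPad2d in float inside a
    quantized model by dequantizing before and re-quantizing after.
    After torch.quantization.convert(), the stubs become real
    (de)quantize ops; before conversion they are pass-throughs."""

    def __init__(self, padding):
        super().__init__()
        self.dequant = torch.quantization.DeQuantStub()
        self.pad = nn.ReflectionPad2d(padding)
        self.quant = torch.quantization.QuantStub()

    def forward(self, x):
        x = self.dequant(x)   # quantized -> float
        x = self.pad(x)       # reflection padding on the float tensor
        return self.quant(x)  # float -> quantized for the next quantized op
```

The extra dequantize/quantize round-trip costs some speed and a little accuracy, so treat it as a stopgap until a native quantized kernel is available for the padding op you need.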