ONNX op support: LpNormalization


The PyTorch ONNX exporter (opset 9) unfolds L2 normalization into multiple operators instead of emitting ONNX's single LpNormalization operator. Is there a clean workaround?

I found a note saying the “Caffe2 & ONNX implementations differ,” but I don’t know how that relates to PyTorch.

Thanks in advance,

I just found out about onnxconverter-common.

Likewise, I opened an issue on onnx-optimizer.

We ended up traversing the ONNX graph in Python and replacing the ops ourselves. Netron is handy for inspecting the graph while doing so :slight_smile:
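A minimal sketch of that traverse-and-replace idea, using plain dicts as a stand-in for `onnx` `NodeProto` objects (in real code you would `onnx.load` the model, walk `model.graph.node`, and rebuild fused nodes with `onnx.helper.make_node`). The assumed unfolded pattern here, `ReduceL2` feeding a `Div`, is just one possible form; inspect your own graph in Netron to see what the exporter actually emitted:

```python
# Sketch: fuse an unfolded L2 normalization (assumed here to be a
# ReduceL2 node whose output is the divisor of a Div) into a single
# LpNormalization node. Nodes are plain dicts standing in for onnx
# NodeProto; with the real library you would mutate model.graph.node.

def fuse_l2_normalization(nodes):
    """Return a new node list with ReduceL2 -> Div pairs fused."""
    # Map each tensor name to the node that produces it.
    producer = {out: n for n in nodes for out in n["outputs"]}
    drop, replacement = set(), {}
    for n in nodes:
        if n["op_type"] != "Div":
            continue
        src, denom = n["inputs"]
        norm = producer.get(denom)
        # Match Div(x, ReduceL2(x)) — the unfolded L2 norm pattern.
        if norm and norm["op_type"] == "ReduceL2" and norm["inputs"] == [src]:
            replacement[id(n)] = {
                "op_type": "LpNormalization",
                "inputs": [src],
                "outputs": list(n["outputs"]),
                # Carry the reduction axis over; p=2 means L2 norm.
                "attrs": {"axis": norm["attrs"].get("axes", [-1])[0], "p": 2},
            }
            drop.add(id(norm))
    # Drop the fused-away ReduceL2 nodes, swap each Div for the new node.
    return [replacement.get(id(n), n) for n in nodes if id(n) not in drop]

# Example graph: x -> ReduceL2 -> Div(x, nrm) -> y
graph = [
    {"op_type": "ReduceL2", "inputs": ["x"], "outputs": ["nrm"],
     "attrs": {"axes": [1], "keepdims": 1}},
    {"op_type": "Div", "inputs": ["x", "nrm"], "outputs": ["y"], "attrs": {}},
]
fused_graph = fuse_l2_normalization(graph)
print([n["op_type"] for n in fused_graph])  # -> ['LpNormalization']
```

With the real `onnx` package the rebuild step is `del model.graph.node[:]` followed by `model.graph.node.extend(new_nodes)`, and it's worth running `onnx.checker.check_model` afterwards.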