Onnx-tf -> com.microsoft::FusedMatMul operator not implemented

Hello,

I need to convert a model implemented in PyTorch to TensorFlow Lite. I have exported the model to ONNX using torch.onnx.export(). The resulting ONNX model works fine with ONNX Runtime. But when I try to use onnx-tf to convert the model to TensorFlow, I get an error.

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("my_model.onnx")      # load the exported ONNX model
tf_rep = prepare(onnx_model)                 # build the TensorFlow representation
tf_rep.export_graph('./tf_export/my_model')  # write out a TF SavedModel

The error message is:

...
raise BackendIsNotSupposedToImplementIt("{} is not implemented.".format(

    BackendIsNotSupposedToImplementIt: FusedMatMul is not implemented.

So it tells me that the operation com.microsoft::FusedMatMul is not supported by onnx-tf.

What can I do from here? Can I add operators from com.microsoft to onnx-tf? Can I tell the PyTorch ONNX exporter not to use operations from a specific domain like com.microsoft? Or can I substitute the unsupported operation?

Any ideas are welcome :slight_smile:

You might want to post this question to the TF discussion board as you would find TF experts there.

Yes, you are right. I will do that.

So the PyTorch-specific part of my question is:
Can I tell the PyTorch ONNX exporter not to use operations from a specific domain like com.microsoft?

I don’t know if the PyTorch export process decides what exactly will be fused, as it seems ONNX Runtime defines it here, so PyTorch might not be in control of the optimizations ONNX Runtime applies.

But is ONNX Runtime involved in producing the .onnx file? I thought ONNX Runtime was only used for inference, and that PyTorch relies only on the onnx package for exporting the .onnx file?

I don’t know, since I cannot find any reference to FusedMatMul in onnx/onnx.