Export model with custom CUDA op in Pytorch 2.0 raises AttributeError: 'function' object has no attribute 'to_function_proto'

Disclaimer: I’m not sure if this is a bug or if I’m missing something.

I’m trying to export to ONNX a Torch model that uses some custom CUDA ops. Following the torch.onnx guide, I define a symbolic function similar to:

from torch.onnx import symbolic_helper as sym_help

def my_symbolic_forward(g, arg0, arg1, int_arg2):
    # unwrap the constant int argument so it can be passed as an ONNX attribute
    arg2_i = sym_help._maybe_get_const(int_arg2, "i")
    # [omitted code where I calculate shape information]
    return g.op(
        "my_namespace::my_forward",
        arg0,
        arg1,
        arg2_i=arg2_i,
    )

The function is then registered via:

torch.onnx.register_custom_op_symbolic("my_namespace::my_forward", my_symbolic_forward, 1)

This was tested and works in Torch 1.9 and 1.13; however, in Torch 2.0.1 I get an AttributeError that looks related to onnxscript. The error is triggered at this line, in the function _find_onnxscript_op, which looks odd, considering that I did not go the onnxscript way to export the model with my custom op.

Does this look like a bug (in which case I'll open an issue on GitHub), or did I miss some way of telling PyTorch that it should not treat this as an onnxscript function?

Based on your description the issue sounds like a real regression, so I would recommend creating an issue on GitHub so that the code owners are aware of it.
That being said, I assume you are using the TorchScript-based exporter under the hood, which is now in maintenance mode.

Thanks, I will open a GitHub issue as soon as I have a chance, and I'll post a link to it here as well for bookkeeping.

I am indeed using TorchScript under the hood, but I am not aware of any other way to export to ONNX… are there alternatives?

Not yet, but the export path is in development as described here.

I have the same issue on 2.0.1 and can confirm that it is fixed by reverting to 1.13.1.

I managed to reproduce the behaviour and opened a GitHub issue here.


Just updating here as well: the GitHub issue has been resolved, so I guess the fix will be included in the next release (hopefully).