A general question regarding exported graphs using Dynamo

Hello all,

I have been learning about torch.compile and its components. I remember that when PyTorch 2.0 was first announced, it was mentioned that many models can't have their graphs exported due to the nature of graph breaks, etc.

The question is: when I optimize a function/nn.Module, is there a way to get the graph(s) exported to ONNX, like using torch.onnx.export? Or is the intention of using Dynamo to have the "optimized" code run using the PyTorch frontend only?

It seems Dynamo could help when an nn.Module can't be exported easily using torch.onnx.export: it outputs the "exportable" subgraphs and points out which parts of the function/nn.Module caused the graph break(s) (via the Python frame evaluation mechanism, I think?).

You're correct in that torch.compile() is not an export workflow; it's a JIT workflow. If you want the whole graph, you can run torch._dynamo.export(), but that's still relatively early days and not ready for large-scale customer adoption.

That said, you can see exactly how the ONNX backend works here: pytorch/onnxrt.py at main · pytorch/pytorch · GitHub. At a high level, Dynamo sends over 1-n subgraphs, which are then exported using the existing ONNX exporter.
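To make the "1-n subgraphs" part concrete, here is a minimal sketch of how a custom Dynamo backend receives those subgraphs. This is not the real onnxrt backend; the backend name and the toy model are made up for illustration. Dynamo calls the backend once per subgraph it carves out, passing an fx.GraphModule plus example inputs:

```python
import torch

captured_graphs = []

def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
    # Each call corresponds to one subgraph carved out by Dynamo.
    captured_graphs.append(gm)
    return gm.forward  # run the subgraph eagerly, unmodified

class Model(torch.nn.Module):
    def forward(self, x):
        y = x + 1
        print("graph break")  # print() is untraceable, so Dynamo splits here
        return y * 2

model = torch.compile(Model(), backend=inspect_backend)
model(torch.randn(3))

# The print() in the middle splits forward() into two subgraphs,
# so the backend was invoked twice.
print(len(captured_graphs))
```

A real backend like onnxrt would export each received gm with the ONNX exporter instead of just returning gm.forward.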

Hi, thanks for your reply!

I understand now that torch.compile is not an export workflow.

I tried to use torch._dynamo.export() to check how it behaves when there is a graph break, and it didn't work (there were calls to "unimplemented" in the error stack), so I guess it is still not stable, as you mentioned it's in its early days.

Error stack, just to double-check:

Traceback (most recent call last):
File "dynamo_playground.py", line 32, in <module>
torch._dynamo.export(model, example_inputs[0])
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 777, in export
result_traced = opt_f(*args, **kwargs)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 117, in __call__
return self.dynamo_ctx(self._orig_mod.__call__)(*args, **kwargs)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 253, in _fn
return fn(*args, **kwargs)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 402, in catch_errors
return callback(frame, cache_size, hooks, frame_state)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 117, in _fn
return fn(*args, **kwargs)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 318, in _convert_frame_assert
return _compile(
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 169, in time_wrapper
r = func(*args, **kwargs)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 386, in _compile
out_code = transform_code_object(code, transform)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 683, in transform_code_object
transformations(instructions, code_options)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 373, in transform
tracer.run()
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1899, in run
super().run()
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 611, in run
and self.step()
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 571, in step
getattr(self, inst.opname)(inst)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 351, in wrapper
return inner_fn(self, inst)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1022, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 502, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/variables/builtin.py", line 596, in call_function
return super().call_function(tx, args, kwargs)
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/variables/base.py", line 230, in call_function
unimplemented(f"call_function {self} {args} {kwargs}")
File "/home/abdulaziz/miniconda3/envs/torch/lib/python3.8/site-packages/torch/_dynamo/exc.py", line 107, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_function BuiltinVariable(print) [ConstantVariable(str)] {}

from user code:
File "dynamo_playground.py", line 14, in forward
print("graph break")
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True

However, as a workaround I tried using:
explanation, out_guards, graphs, ops_per_graph, break_reasons, explanation_verbose = torch._dynamo.explain(model, example_input)
and it worked well. It extracted two subgraphs, and I was able to convert them to ONNX using the onnxrt backend code you linked.

Since the two functions should at least return subgraphs, should I open an issue that torch._dynamo.export() doesn't work when hitting a graph break, or is that the intended behavior for now?

Thanks again for your answer !

I believe that's intended: a print is a graph break, so export doesn't know how to export that and you get an error. If you want partial graphs, then torch.compile is the way to go.

Thanks for your answer. That's reasonable, but it was confusing why torch._dynamo.explain produces several subgraphs while export errors out. I will experiment more with it.