import torch
import torch._dynamo as dynamo

@torch.compile(mode="reduce-overhead")
def toy_example(a, b):
    x = a / (torch.abs(a) + 1)
    if b.sum() < 0:  # data-dependent control flow: causes a graph break
        b = b * -1
    return x * b

a = torch.randn(10, device="cuda")
b = torch.randn(10, device="cuda")

# Report graph-break statistics via torch._dynamo.explain.
x = dynamo.explain(toy_example)(a, b)
print(x.break_reasons)
print(x.graph_break_count)
print(x.graph_count)

# Run the compiled function a few times on fresh inputs.
for i in range(5):
    a = torch.randn(10, device="cuda")
    b = torch.randn(10, device="cuda")
    print(toy_example(a, b))
The function toy_example above has a graph break: the branch if b.sum() < 0: depends on tensor data. We can print the graph-break reason with dynamo.explain, as shown above.
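For reference, the three prints report roughly the following (exact formatting varies across PyTorch versions; the reason string is the same one that appears in the COMPILING GRAPH line of the +dynamo log further below):

[GraphCompileReason(reason='generic_jump TensorVariable()', user_stack=[<FrameSummary file .../conditional.py, line 7 in toy_example>], graph_break=True)]
1
2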
But that requires modifying the Python/PyTorch source program. I am wondering whether there is a cleaner way, i.e. setting certain environment variables that achieve the same thing.
I tried export TORCH_LOGS="graph_breaks", but for this program it prints no information about the graph break. Why is that? (And in the cases where it does print something, it does not state the same reason as dynamo.explain; rather, it prints a stack trace showing which line in the source code caused the graph break.)
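As a sanity check that the logging machinery itself is wired up, the same settings can also be enabled programmatically via torch._logging.set_logs (this of course requires a source change, which is exactly what I want to avoid, but it confirms the environment variable and the Python API drive the same logging registry). A minimal sketch:

import logging
import torch

# Programmatic equivalent of `export TORCH_LOGS="graph_breaks"`;
# must run before the first compilation is triggered.
torch._logging.set_logs(graph_breaks=True)

# Programmatic equivalent of `export TORCH_LOGS="+dynamo"`
# (the "+" prefix lowers the dynamo log level to DEBUG).
torch._logging.set_logs(dynamo=logging.DEBUG)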
When I use export TORCH_LOGS="+dynamo" instead, I get output like the following:
V0727 00:09:20.943000 140387053281280 torch/_dynamo/convert_frame.py:775] [0/0] torchdynamo start compiling toy_example /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py:4, stack (elided 6 frames):
V0727 00:09:20.943000 140387053281280 torch/_dynamo/convert_frame.py:775] [0/0] File "/media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py", line 14, in <module>
V0727 00:09:20.943000 140387053281280 torch/_dynamo/convert_frame.py:775] [0/0] x = dynamo.explain(toy_example)(a, b)
V0727 00:09:20.943000 140387053281280 torch/_dynamo/convert_frame.py:775] [0/0] File "/media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/pytorch-venv-front-end-unmodified/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 804, in inner
V0727 00:09:20.943000 140387053281280 torch/_dynamo/convert_frame.py:775] [0/0] opt_f(*args, **kwargs)
V0727 00:09:20.943000 140387053281280 torch/_dynamo/convert_frame.py:775] [0/0] File "/media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/pytorch-venv-front-end-unmodified/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 432, in _fn
V0727 00:09:20.943000 140387053281280 torch/_dynamo/convert_frame.py:775] [0/0] return fn(*args, **kwargs)
V0727 00:09:20.943000 140387053281280 torch/_dynamo/convert_frame.py:775] [0/0] File "/media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/pytorch-venv-front-end-unmodified/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1115, in __call__
V0727 00:09:20.943000 140387053281280 torch/_dynamo/convert_frame.py:775] [0/0] return self._torchdynamo_orig_callable(
V0727 00:09:20.943000 140387053281280 torch/_dynamo/convert_frame.py:775] [0/0]
I0727 00:09:20.944000 140387053281280 torch/_dynamo/logging.py:55] [0/0] Step 1: torchdynamo start tracing toy_example /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py:4
V0727 00:09:20.944000 140387053281280 torch/fx/experimental/symbolic_shapes.py:2530] [0/0] create_env
V0727 00:09:20.949000 140387053281280 torch/_dynamo/symbolic_convert.py:774] [0/0] [__trace_source] TRACE starts_line /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py:6 in toy_example (toy_example)
V0727 00:09:20.949000 140387053281280 torch/_dynamo/symbolic_convert.py:774] [0/0] [__trace_source] x = a / (torch.abs(a) + 1)
V0727 00:09:20.950000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE LOAD_FAST a []
V0727 00:09:20.950000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch [LazyVariableTracker()]
V0727 00:09:20.951000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE LOAD_ATTR abs [LazyVariableTracker(), PythonModuleVariable(<module 'torch' from '/media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/pytorch-venv-front-end-unmodified/lib/python3.10/site-packages/torch/__init__.py'>)]
V0727 00:09:20.952000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE LOAD_FAST a [LazyVariableTracker(), TorchInGraphFunctionVariable(<built-in method abs of type object at 0x7fae66a3a500>)]
V0727 00:09:20.952000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [LazyVariableTracker(), TorchInGraphFunctionVariable(<built-in method abs of type object at 0x7fae66a3a500>), LazyVariableTracker()]
V0727 00:09:20.953000 140387053281280 torch/_dynamo/output_graph.py:2029] [0/0] create_graph_input L_a_ L['a']
V0727 00:09:20.953000 140387053281280 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['a'] (10,) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=LocalSource(local_name='a', cell_or_freevar=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0727 00:09:20.955000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE LOAD_CONST 1 [LazyVariableTracker(), TensorVariable()]
V0727 00:09:20.955000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE BINARY_ADD None [LazyVariableTracker(), TensorVariable(), ConstantVariable()]
V0727 00:09:20.956000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE BINARY_TRUE_DIVIDE None [LazyVariableTracker(), TensorVariable()]
V0727 00:09:20.957000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE STORE_FAST x [TensorVariable()]
V0727 00:09:20.958000 140387053281280 torch/_dynamo/symbolic_convert.py:774] [0/0] [__trace_source] TRACE starts_line /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py:7 in toy_example (toy_example)
V0727 00:09:20.958000 140387053281280 torch/_dynamo/symbolic_convert.py:774] [0/0] [__trace_source] if b.sum() < 0:
V0727 00:09:20.958000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE LOAD_FAST b []
V0727 00:09:20.958000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE LOAD_ATTR sum [LazyVariableTracker()]
V0727 00:09:20.958000 140387053281280 torch/_dynamo/output_graph.py:2029] [0/0] create_graph_input L_b_ L['b']
V0727 00:09:20.958000 140387053281280 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['b'] (10,) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=LocalSource(local_name='b', cell_or_freevar=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0727 00:09:20.959000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 0 [GetAttrVariable()]
V0727 00:09:20.960000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE LOAD_CONST 0 [TensorVariable()]
V0727 00:09:20.960000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE COMPARE_OP < [TensorVariable(), ConstantVariable()]
V0727 00:09:20.961000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [0/0] [__trace_bytecode] TRACE POP_JUMP_IF_FALSE 38 [TensorVariable()]
V0727 00:09:20.961000 140387053281280 torch/_dynamo/symbolic_convert.py:322] [0/0] generic_jump triggered compile
V0727 00:09:20.962000 140387053281280 torch/_dynamo/output_graph.py:971] [0/0] COMPILING GRAPH due to GraphCompileReason(reason='generic_jump TensorVariable()', user_stack=[<FrameSummary file /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py, line 7 in toy_example>], graph_break=True)
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] TRACED GRAPH
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/pytorch-venv-front-end-unmodified/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] def forward(self, L_a_: "f32[10]", L_b_: "f32[10]"):
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] l_a_ = L_a_
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] l_b_ = L_b_
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code]
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] # File: /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py:6 in toy_example, code: x = a / (torch.abs(a) + 1)
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] abs_1: "f32[10]" = torch.abs(l_a_)
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] add: "f32[10]" = abs_1 + 1; abs_1 = None
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] x: "f32[10]" = l_a_ / add; l_a_ = add = None
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code]
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] # File: /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py:7 in toy_example, code: if b.sum() < 0:
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] sum_1: "f32[]" = l_b_.sum(); l_b_ = None
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] lt: "b8[]" = sum_1 < 0; sum_1 = None
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code] return (x, lt)
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code]
V0727 00:09:20.963000 140387053281280 torch/_dynamo/output_graph.py:1290] [0/0] [__graph_code]
I0727 00:09:20.964000 140387053281280 torch/_dynamo/logging.py:55] [0/0] Step 2: calling compiler function dynamo_graph_accumulating_compiler
I0727 00:09:20.964000 140387053281280 torch/_dynamo/logging.py:55] [0/0] Step 2: done compiler function dynamo_graph_accumulating_compiler
I0727 00:09:20.985000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3633] [0/0] produce_guards
V0727 00:09:20.985000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3815] [0/0] track_symint L['a'].size()[0] 10 None
V0727 00:09:20.985000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3815] [0/0] track_symint L['a'].stride()[0] 1 None
V0727 00:09:20.985000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3815] [0/0] track_symint L['a'].storage_offset() 0 None
V0727 00:09:20.985000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3815] [0/0] track_symint L['b'].size()[0] 10 None
V0727 00:09:20.986000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3815] [0/0] track_symint L['b'].stride()[0] 1 None
V0727 00:09:20.986000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3815] [0/0] track_symint L['b'].storage_offset() 0 None
V0727 00:09:20.986000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3979] [0/0] Skipping guard L['a'].size()[0] == 10
V0727 00:09:20.986000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3979] [0/0] Skipping guard L['a'].stride()[0] == 1
V0727 00:09:20.986000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3979] [0/0] Skipping guard L['a'].storage_offset() == 0
V0727 00:09:20.986000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3979] [0/0] Skipping guard L['b'].size()[0] == 10
V0727 00:09:20.986000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3979] [0/0] Skipping guard L['b'].stride()[0] == 1
V0727 00:09:20.986000 140387053281280 torch/fx/experimental/symbolic_shapes.py:3979] [0/0] Skipping guard L['b'].storage_offset() == 0
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2168] [0/0] [__guards] GUARDS:
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards]
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] TREE_GUARD_MANAGER:
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] +- RootGuardManager
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:459 in init_ambient_guards
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | +- GuardManager: source=L['a'], accessed_by=DictGetItemGuardAccessor(a)
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['a'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[10], stride=[1]) # x = a / (torch.abs(a) + 1) # graph_break_analysis/conditional.py:6 in toy_example
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['a'], '_dynamo_dynamic_indices') == False # x = a / (torch.abs(a) + 1) # graph_break_analysis/conditional.py:6 in toy_example
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['a'], L['b'])
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | +- GuardManager: source=L['b'], accessed_by=DictGetItemGuardAccessor(b)
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['b'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[10], stride=[1]) # if b.sum() < 0: # graph_break_analysis/conditional.py:7 in toy_example
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['b'], '_dynamo_dynamic_indices') == False # if b.sum() < 0: # graph_break_analysis/conditional.py:7 in toy_example
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['a'], L['b'])
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | | +- GuardManager: source=G['torch'], accessed_by=DictGetItemGuardAccessor(torch)
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | | | +- ID_MATCH: ___check_obj_id(G['torch'], 140387045584352) # x = a / (torch.abs(a) + 1) # graph_break_analysis/conditional.py:6 in toy_example
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | | | +- GuardManager: source=G['torch'].abs, accessed_by=GetAttrGuardAccessor(abs)
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].abs, 140387040598416) # x = a / (torch.abs(a) + 1) # graph_break_analysis/conditional.py:6 in toy_example
V0727 00:09:20.987000 140387053281280 torch/_dynamo/guards.py:2147] [0/0] [__guards]
V0727 00:09:20.987000 140387053281280 torch/_dynamo/convert_frame.py:1081] skipping: _fn (reason: in skipfiles, file: /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/pytorch-venv-front-end-unmodified/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] torchdynamo start compiling torch_dynamo_resume_in_toy_example_at_7 /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py:7, stack (elided 6 frames):
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] File "/media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py", line 14, in <module>
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] x = dynamo.explain(toy_example)(a, b)
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] File "/media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/pytorch-venv-front-end-unmodified/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 804, in inner
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] opt_f(*args, **kwargs)
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] File "/media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/pytorch-venv-front-end-unmodified/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 432, in _fn
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] return fn(*args, **kwargs)
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] File "/media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py", line 4, in toy_example
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] @torch.compile(mode="reduce-overhead")
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] File "/media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/pytorch-venv-front-end-unmodified/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1115, in __call__
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0] return self._torchdynamo_orig_callable(
V0727 00:09:21.021000 140387053281280 torch/_dynamo/convert_frame.py:775] [1/0]
I0727 00:09:21.022000 140387053281280 torch/_dynamo/logging.py:55] [1/0] Step 1: torchdynamo start tracing torch_dynamo_resume_in_toy_example_at_7 /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py:7
V0727 00:09:21.022000 140387053281280 torch/fx/experimental/symbolic_shapes.py:2530] [1/0] create_env
V0727 00:09:21.023000 140387053281280 torch/_dynamo/symbolic_convert.py:774] [1/0] [__trace_source] TRACE starts_line /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py:7 in torch_dynamo_resume_in_toy_example_at_7 (toy_example)
V0727 00:09:21.023000 140387053281280 torch/_dynamo/symbolic_convert.py:774] [1/0] [__trace_source] if b.sum() < 0:
V0727 00:09:21.023000 140387053281280 torch/_dynamo/symbolic_convert.py:797] [1/0] [__trace_bytecode] TRACE JUMP_ABSOLUTE 32 []
V0727 00:09:21.023000 140387053281280 torch/_dynamo/symbolic_convert.py:774] [1/0] [__trace_source] TRACE starts_line /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py:8 in torch_dynamo_resume_in_toy_example_at_7 (toy_example)
...
Note this line in the debug log above, which carries exactly the information I want: the graph-break reason (generic_jump TensorVariable(), i.e. a jump whose direction depends on a tensor value) together with the source location:
V0727 00:09:20.962000 140387053281280 torch/_dynamo/output_graph.py:971] [0/0] COMPILING GRAPH due to GraphCompileReason(reason='generic_jump TensorVariable()', user_stack=[<FrameSummary file /media/abhishek/Abhishek_NVMe/shweta_machine/trace_analysis/graph_break_analysis/conditional.py, line 7 in toy_example>], graph_break=True)
Is there a way to get just this one line for a PyTorch program that uses torch.compile, purely via an environment variable? export TORCH_LOGS="+dynamo" does print the information, but buried in a lot of other output.
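(A shell-level workaround I'm aware of is to post-filter the verbose output, which goes to stderr:

TORCH_LOGS="+dynamo" python conditional.py 2>&1 | grep "COMPILING GRAPH due to"

But that still pays the cost of generating the full debug log and relies on the message text staying stable across versions, so a dedicated TORCH_LOGS setting would be much cleaner.)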