[torch.export] How to disable sym_size_int?

Hello,
I noticed that starting with torch 2.5, torch.ops.aten.sym_size.int nodes are introduced to improve the coverage of dynamic shape support during torch._dynamo.export tracing.

For example, if the torch code is written as batch_size = x.size()[0], the graph module traced through torch._dynamo.export with tracing_mode="symbolic" contains the node
sym_size_int = torch.ops.aten.sym_size.int(l_x_, 0) in torch 2.5, whereas in torch 2.4 the same code was expressed as size = l_x.size(); getitem_1 = size[0].
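For reference, the 2.4-style size() + getitem representation can be reproduced with plain torch.fx.symbolic_trace (a minimal sketch; the module and names here are illustrative, not from my actual model, and symbolic_trace is used instead of torch._dynamo.export just to show the node pattern):

```python
import torch
import torch.fx as fx

# Illustrative single layer following the batch_size = x.size()[0] pattern.
class Layer(torch.nn.Module):
    def forward(self, x):
        batch_size = x.size()[0]      # traced as a size() call plus a getitem node
        return x.view(batch_size, -1)

gm = fx.symbolic_trace(Layer())
# Inspect the (op, target) pairs of the traced graph; this shows the
# 2.4-style representation: call_method "size" followed by getitem.
ops = [(n.op, str(n.target)) for n in gm.graph.nodes]
print(ops)
```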

I am wondering whether there is a specific way to prevent the occurrence of sym_size_int in torch 2.5. The problem I am encountering is that a single sym_size_int node is used globally across all the submodules. For example, given an LLM with N decoder layers, each of which is a submodule of the LLM and contains the code batch_size = hidden_state.size()[0]; reshaped_hidden_state = hidden_state.view(batch_size, -1), the graph captured in torch 2.5 contains reshaped_hidden_state_n = hidden_state_n.view(sym_size_int, -1) for all N layers, n = 0, 1, …, N-1.

I am wondering if there is a way to express the graph as sym_size_int_n = torch.ops.aten.sym_size.int(hidden_state_n, 0); reshaped_hidden_state_n = hidden_state_n.view(sym_size_int_n, -1), i.e., to force the tensor size to be computed inside each decoder layer.
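One possible post-processing workaround is a small fx.Graph pass that gives every consumer of a shared sym_size.int node its own local copy. This is a sketch under the assumption that duplicating the node is acceptable for downstream transforms; the hand-built graph below just mimics the 2.5-style shape with one sym_size.int node shared by two view() calls, and all names are illustrative:

```python
import torch
import torch.fx as fx

# Hand-built graph mimicking the torch 2.5 pattern: one sym_size.int node
# shared by two view() calls (illustrative, not a real exported graph).
g = fx.Graph()
x = g.placeholder("x")
sym = g.call_function(torch.ops.aten.sym_size.int, (x, 0))
v1 = g.call_method("view", (x, sym, -1))
v2 = g.call_method("view", (v1, sym, -1))
g.output(v2)
gm = fx.GraphModule(torch.nn.Module(), g)

def localize_sym_size(gm: fx.GraphModule) -> fx.GraphModule:
    """Duplicate shared sym_size.int nodes so each consumer gets its own copy."""
    for node in list(gm.graph.nodes):  # snapshot: don't iterate over new clones
        if node.op == "call_function" and node.target == torch.ops.aten.sym_size.int:
            # Keep the first consumer on the original node; clone for the rest.
            for user in list(node.users)[1:]:
                with gm.graph.inserting_before(user):
                    clone = gm.graph.call_function(node.target, node.args, node.kwargs)
                user.replace_input_with(node, clone)
    gm.graph.lint()
    gm.recompile()
    return gm

localize_sym_size(gm)
```

After the pass, each view() call reads its size from its own sym_size.int node, which is closer to the per-layer structure described above.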

Thanks in advance!

What’s the problem? That the same sym_size_int node is being reused across the N layers?

Yes, I have been implementing functions that transform the fx.Graph in a decoder-layer-wise manner. The functions work fine with torch 2.4 but not with torch 2.5 due to the issue described above. If this is something that will be enforced in torch > 2.5, I will change the implementations, but I was wondering if there is any way to revert to the original graph format.
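Before running the layer-wise transforms, it may help to detect whether a graph is affected at all. A small helper along these lines (a sketch; the hand-built graph only illustrates the shared-node situation) flags sym_size.int nodes that feed more than one consumer:

```python
import torch
import torch.fx as fx

def shared_sym_size_nodes(gm: fx.GraphModule):
    """List sym_size.int nodes with more than one consumer, i.e. the
    torch 2.5 situation that breaks per-layer graph transforms."""
    return [
        n for n in gm.graph.nodes
        if n.op == "call_function"
        and n.target == torch.ops.aten.sym_size.int
        and len(n.users) > 1
    ]

# Illustrative graph: one sym_size.int node feeding two view() calls.
g = fx.Graph()
x = g.placeholder("x")
sym = g.call_function(torch.ops.aten.sym_size.int, (x, 0))
v1 = g.call_method("view", (x, sym, -1))
v2 = g.call_method("view", (v1, sym, -1))
g.output(v2)
gm = fx.GraphModule(torch.nn.Module(), g)

shared = shared_sym_size_nodes(gm)
```

If the list is empty, the 2.4-era transforms can run unchanged; otherwise the graph needs the per-layer rewrite first.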