PyTorch compile requires fixed input naming?


My model inference has two steps, and each step digests an input tensor with a different name: def forward(input, step). Let's say the first step takes in an input tensor named f'input_{step}' (i.e. input_1), and the second step takes in an input tensor named input_2. When I run the first inference (with torch.compile) using the name input_1, and then run the second step, the model seems to automatically assume the input tensor name should still be input_1 and overwrites input_2. So my question is: should we always keep consistent input naming for a compiled model?


Could you share a code snippet for what you mean exactly? Are the inputs the same shape? If not, then dynamic=True should help, but otherwise a JIT compiler will typically specialize on properties of the input, so I'd just like to understand a bit better what you're trying to do.

Thanks for the reply. Yes the input shapes are the same, here I attached the code to replicate the issue:

import torch

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()  # required before assigning submodules
        self.lin = torch.nn.Linear(100, 10)

    def forward(self, x, step):
        return {f'output_{step}': self.lin(x[f'input_{step}'])}

mod = MyModule()
opt_mod = torch.compile(mod)

my_input = {
    'input_0': torch.ones([100]),
    'input_1': torch.ones([100]),
}

for step in range(2):
    output = opt_mod(my_input, step)

My expected output would be:

{'output_0': tensor([...])}
{'output_1': tensor([...])}

But the actual output from the code above is:

{'output_0': tensor([...])}
{'output_0': tensor([...])}

where the second key name is changed from output_1 to output_0.

Interesting, I x-posted this on GitHub since I'm not sure if this is a bug or intended behavior: torch.compile specializes on input name · Issue #103099 · pytorch/pytorch · GitHub