TorchScript: executing a traced static graph with inputs of different shapes does not trigger recompilation (retracing)

Hello,

I am new to TorchScript and have been playing around with it. I have some experience with TensorFlow and am trying to compare the two frameworks.

Here is an example I am playing with:

import torch

class Test(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, input):
        # argwhere returns the indices of nonzero elements, so the
        # output shape depends on the input's values, not just its shape
        res = torch.argwhere(input)
        return res

x = torch.tensor([1, 2, 3])
y = torch.tensor([0, 1, 2, 3, 5])

t = Test()
# example inputs are passed to torch.jit.trace as a tuple
traced_t = torch.jit.trace(t, (x,))

print(traced_t(x))  # tensor([[0], [1], [2]])
print(traced_t(y))  # tensor([[1], [2], [3], [4]])

Here x and y have different sizes.
I trace the module Test with x as the example input, then execute the traced graph with both x and y. Because y has a different size than x, I expected the graph to be retraced or recompiled, as it would be in TensorFlow's static-graph mode. However, I did not observe any recompilation happening.
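
For what it's worth, I also inspected the traced module to check whether the example shapes were baked in (a minimal sketch, continuing from the snippet above; traced_t.graph and traced_t.code are the TorchScript inspection attributes, as far as I understand):

# print the recorded TorchScript IR for the trace
print(traced_t.graph)
# print a Python-like rendering of the traced computation
print(traced_t.code)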

Can anyone point out:

  1. When I run the statement traced_t(y), is the graph recompiled?
  2. If it is not recompiled, how does TorchScript handle the new shape (compared with TensorFlow's static graph)?
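
For context, this is the TensorFlow behavior I have in mind (a minimal sketch, assuming tf.function's default shape-based retracing; the Python-side print only runs while a new concrete function is being traced):

import tensorflow as tf

@tf.function
def f(x):
    # this print executes only during tracing, so it signals a (re)trace
    print("tracing for shape:", x.shape)
    return tf.where(x != 0)  # rough analogue of torch.argwhere

f(tf.constant([1, 2, 3]))        # first call: traces the function
f(tf.constant([0, 1, 2, 3, 5]))  # new input shape: triggers a retrace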

Best,
Cijie