TorchDynamo - Why do I get different subgraph partitions when running the same script multiple times?

When running the same script (for example, training.py) multiple times, why do I get different subgraph partitions?
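One way to observe this is to print the partitioning that TorchDynamo produces on each run and compare the counts. The following is a minimal sketch, assuming PyTorch 2.x where `torch._dynamo.explain` returns an object with `graph_count` and `graph_break_count` attributes; the `toy_example` function and its inputs are illustrative only.

```python
# Sketch: inspect how TorchDynamo partitions a function into subgraphs,
# so that differences between repeated runs become visible.
import torch
import torch._dynamo as dynamo


def toy_example(a, b):
    x = a / (torch.abs(a) + 1)
    if b.sum() < 0:          # data-dependent branch: a typical cause of graph breaks
        b = b * -1
    return x * b


for run in range(2):
    dynamo.reset()           # clear previously compiled graphs between runs
    explanation = dynamo.explain(toy_example)(torch.randn(10), torch.randn(10))
    print(f"run {run}: {explanation.graph_count} graphs, "
          f"{explanation.graph_break_count} graph breaks")
```

If the printed graph counts or break reasons differ between otherwise identical runs of your script, that output is a useful starting point for narrowing down where the partitioning diverges.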