With PyTorch Distributed training, code changes affect the already-running application

I noticed two strange things.
First, while a Distributed training job was running, I modified the source code, and after a while the running application raised an error caused by my modification.
Second, I added a print statement to the code, and after a while the program that was already running started printing it.
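For context, here is a minimal sketch of the kind of script I am running; the model, dataset, and launch command below are placeholders I wrote for illustration, not my real code. It is launched with `torchrun` and uses a `DataLoader` with `num_workers > 0`, so new processes are created while the job is running.

```python
# Minimal sketch (placeholders, not my real code). Launched with:
#   torchrun --nproc_per_node=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 10), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)
    # num_workers > 0: worker processes get created while the job runs
    loader = DataLoader(dataset, batch_size=32, sampler=sampler, num_workers=4)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for epoch in range(100):
        sampler.set_epoch(epoch)
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            loss = torch.nn.functional.mse_loss(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        if dist.get_rank() == 0:
            # this is the kind of print I added while the job was running
            print(f"epoch {epoch} loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

My guess (unconfirmed) is that processes started after my edit, such as restarted workers or newly spawned DataLoader workers, re-read the modified file from disk, which would explain both the error and the new print output. Is that what is happening, and is this expected behavior?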