Why does modifying the training code affect processes that have already started training (Windows)?

For example, given the following code:

from torch.utils.data import DataLoader
from torchvision.datasets import FakeData
from torchvision.transforms import ToTensor


dataset = FakeData(size=2000, image_size=(3, 32, 32), num_classes=1000, transform=ToTensor())
dataloader = DataLoader(dataset, batch_size=10, num_workers=4)

def train_batch(batch: int, images, labels):
    ...

def train_epoch(epoch: int):
    print(f'training ... epoch {epoch}')
    for batch, (images, labels) in enumerate(dataloader):
        train_batch(batch, images, labels)

def train():
    for epoch in range(0, 300):
        train_epoch(epoch)

def main():
    for _ in range(0, 10):
        if _:
            ...  # comment out this line right after "training ..." is printed
        else:
            ...
        train()

if __name__ == '__main__':
    main()

I commented out the line between if and else right after "training ..." was printed, only to find that the process stopped with an IndentationError. The process had already loaded the code and started training, so why did editing the file on disk affect it at all?
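
If it helps, below is a minimal, torch-free sketch of what I suspect is the same pattern: a pool of worker processes is re-created on every iteration of an outer loop, which, as far as I can tell, is what the DataLoader does for each epoch when num_workers > 0 with default settings. The helper names, pool size, and sleep are only for illustration.

import time
from multiprocessing import Pool

def square(x):
    return x * x

def main():
    for step in range(10):
        print(f'step {step}: creating a fresh pool of workers ...')
        # On Windows, multiprocessing starts workers with the "spawn" method,
        # so each newly created worker re-imports this script from the file on disk.
        with Pool(processes=4) as pool:
            pool.map(square, range(100))
        time.sleep(5)  # window in which to edit this file on disk

if __name__ == '__main__':
    main()

As far as I understand, editing the file during the sleep would only be seen by workers spawned afterwards, not by the main process that already imported the code. Is this the mechanism that makes my training run stop with the IndentationError?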