Dataloader spawn re-imports scripts and crashes

A problem I've been facing for a while: when I run an experiment and simultaneously edit the scripts that drive it, the interpreter sometimes re-reads a script while I'm mid-edit and crashes.

I thought I had solved it when I upgraded PyTorch and started using persistent_workers=True, but I'm still experiencing this error.

Traceback (most recent call last):
  File "D:\ProgramData\Anaconda3\envs\env_zoo\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "D:\ProgramData\Anaconda3\envs\env_zoo\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "D:\ProgramData\Anaconda3\envs\env_zoo\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "D:\ProgramData\Anaconda3\envs\env_zoo\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "D:\ProgramData\Anaconda3\envs\env_zoo\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "D:\ProgramData\Anaconda3\envs\env_zoo\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\users\Gony\FMRI_inpainting1\examples\bert_fine_tune.py", line 5, in <module>
    import lib.train as train
  File "D:\users\Gony\FMRI_inpainting1\lib\train\__init__.py", line 4, in <module>
    from .trainer_gony import Trainer_gony
  File "D:\users\Gony\FMRI_inpainting1\lib\train\trainer_gony.py", line 76
    self.writer.update_tensorboard(grads=None,initial_val_loss)
                                             ^
SyntaxError: positional argument follows keyword argument
 19%|█▉        | 999/5160 [1:16:06<5:16:58,  4.57s/it]
Traceback (most recent call last):
  File "D:\ProgramData\Anaconda3\envs\env_zoo\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\ProgramData\Anaconda3\envs\env_zoo\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
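For what it's worth, the SyntaxError in the traceback is independent of the multiprocessing problem: in a Python call, once one argument is passed by keyword, every later argument must also be passed by keyword. A minimal sketch, using a stand-in update_tensorboard (the real signature isn't shown in the traceback):

```python
# Stand-in for Trainer_gony's writer method; the real signature is unknown.
def update_tensorboard(grads=None, initial_val_loss=None):
    return grads, initial_val_loss

# This mirrors the failing line and raises at compile time:
#   update_tensorboard(grads=None, 0.5)
#   SyntaxError: positional argument follows keyword argument

# Fix: pass everything after the first keyword argument by name as well.
result = update_tensorboard(grads=None, initial_val_loss=0.5)
print(result)  # (None, 0.5)
```

Because the spawned worker re-imports the script, this SyntaxError surfaces in the child process the moment the half-edited file is re-read, and the parent then sees the BrokenPipeError.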

I'm not sure if this is a PyTorch-specific issue, a general Python multiprocessing limitation, or something that depends on the Python interpreter/IDE being used.
Based on the description, it seems that your Python interpreter reloads the source files after each change or at a specific interval.
Based on this post, it also seems to be Windows-specific behavior.
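The underlying mechanism can be shown with the stdlib alone: with the "spawn" start method (the default on Windows, and what the traceback's spawn.py frames indicate), every worker process imports the main script from disk again, so any top-level code runs in each child, and a file that is syntactically broken at that moment crashes the child. A minimal sketch, forcing "spawn" explicitly so it behaves the same on any OS:

```python
import multiprocessing as mp

def worker(x):
    # Runs in a child process that has freshly re-imported this file.
    return x * x

def main():
    # Force "spawn" to mimic the Windows default; children re-import
    # this script rather than inheriting the parent's memory (fork).
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(worker, [1, 2, 3]))  # [1, 4, 9]

if __name__ == "__main__":
    # This guard is skipped when a spawned child re-imports the file,
    # which is exactly why spawn-based DataLoader workers require it.
    main()
```

Keeping the training entry point under the guard doesn't prevent the re-import itself, so editing and saving a script with a syntax error while workers are being spawned can still crash a child, as in the traceback above.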