PyTorch RetinaNet - multi-GPU issue

Training runs fine on a server with a single GTX 1080 Ti, but it fails on another workstation with two RTX 2080 Ti GPUs (NVLink) with the following error:

Traceback (most recent call last):
File "", line 1, in
File "/opt/conda/lib/python3.6/multiprocessing/", line 105, in spawn_main
exitcode = _main(fd)
File "/opt/conda/lib/python3.6/multiprocessing/", line 114, in _main
File "/opt/conda/lib/python3.6/multiprocessing/", line 225, in prepare
File "/opt/conda/lib/python3.6/multiprocessing/", line 277, in _fixup_main_from_path
File "/opt/conda/lib/python3.6/", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/opt/conda/lib/python3.6/", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/opt/conda/lib/python3.6/", line 85, in _run_code
exec(code, run_globals)
File "/home/elib/Dev/Retinanet_PT/", line 39, in
File "/home/elib/Dev/Retinanet_PT/retinanet/", line 185, in main
torch.multiprocessing.spawn(worker, args=(args, world, model, state), nprocs=world)
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/", line 158, in spawn
File "/opt/conda/lib/python3.6/multiprocessing/", line 105, in start
self._popen = self._Popen(self)
File "/opt/conda/lib/python3.6/multiprocessing/", line 284, in _Popen
return Popen(process_obj)
File "/opt/conda/lib/python3.6/multiprocessing/", line 32, in init
File "/opt/conda/lib/python3.6/multiprocessing/", line 19, in init
File "/opt/conda/lib/python3.6/multiprocessing/", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/opt/conda/lib/python3.6/multiprocessing/", line 143, in get_preparation_data
File "/opt/conda/lib/python3.6/multiprocessing/", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

I figured it out:
a script that launches training on multiple GPUs via torch.multiprocessing.spawn has to guard its spawning code with if __name__ == '__main__':. With that guard in place it works now.
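For anyone hitting the same error: a minimal sketch of the guard idiom, illustrated with the standard-library multiprocessing module (which is what raises the bootstrapping error above). The worker function, queue, and process count here are made up for illustration; in the actual training script the same guard goes around the call to torch.multiprocessing.spawn(worker, args=(args, world, model, state), nprocs=world).

```python
import multiprocessing as mp

def worker(rank, queue):
    # Illustrative worker: each spawned process reports its rank squared.
    # With the "spawn" start method, every child re-imports the main
    # module, so any process-starting code must live behind the guard.
    queue.put(rank * rank)

def main():
    ctx = mp.get_context("spawn")  # same start method torch's spawn uses
    queue = ctx.Queue()
    procs = [ctx.Process(target=worker, args=(r, queue)) for r in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return sorted(queue.get() for _ in range(2))

if __name__ == "__main__":
    # Without this guard, each child re-executes the spawning code while
    # importing the module, and multiprocessing aborts with the
    # "bootstrapping phase" error shown above.
    print(main())  # [0, 1]
```

The key point is that the guard is what distinguishes the parent process (where __name__ is "__main__") from the re-imported children, so process creation runs exactly once.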