_MultiProcessingDataLoaderIter error when num_workers>0 in Debug mode


Environment: remote Linux + PyCharm IDE + PyTorch 1.8.0

When I tried to debug my code in the PyCharm IDE, I got a `_MultiProcessingDataLoaderIter` error.
I found that the error only occurs when I set `num_workers_per_gpu` > 0.
There is no bug in my code itself, because when I run it directly there is no error at all.
The error only appears in Debug mode, and with `num_workers_per_gpu` = 0 there is no error in either mode (a minimal repro sketch follows the list below). It looks very strange:
(1) num_workers_per_gpu = 0:
debug mode: :white_check_mark:
run mode: :white_check_mark:
(2) num_workers_per_gpu > 0:
debug mode: :x:
run mode: :white_check_mark:
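
For context, here is a minimal sketch of the kind of setup that triggers this for me. The dataset is a placeholder (my real one loads images from disk), and `num_workers_per_gpu` from my config is passed to the `num_workers` argument of `DataLoader`:

    import torch
    from torch.utils.data import Dataset, DataLoader

    class ToyDataset(Dataset):
        # placeholder for my real dataset, which reads samples from disk
        def __len__(self):
            return 100

        def __getitem__(self, idx):
            return torch.randn(3, 32, 32), idx % 10

    num_workers_per_gpu = 2  # > 0 breaks under the PyCharm debugger; 0 works

    loader = DataLoader(ToyDataset(), batch_size=8,
                        num_workers=num_workers_per_gpu)

    for images, labels in loader:
        pass  # in Debug mode this iteration ends with the error above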

It seems that in Debug mode the following loop (from `_MultiProcessingDataLoaderIter._next_data` in `torch/utils/data/dataloader.py`) exits incorrectly: the `while` condition becomes false without a `break`, so the `else` branch raises `StopIteration`:

    while self._rcvd_idx < self._send_idx:
        info = self._task_info[self._rcvd_idx]
        worker_id = info[0]
        if len(info) == 2 or self._workers_status[worker_id]:  # has data or is still active
            break
        del self._task_info[self._rcvd_idx]
        self._rcvd_idx += 1
    else:
        # no valid `self._rcvd_idx` is found (i.e., didn't break)
        if not self._persistent_workers:
            self._shutdown_workers()
        raise StopIteration
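
(Note on the construct: the `else` branch of a Python `while` loop runs only when the loop condition becomes false without a `break` having fired, so `StopIteration` is raised exactly when the loop scans past every pending task without finding one that has data or a live worker. A standalone demonstration:)

    # Demonstration of while-else semantics: the else branch runs
    # only when the loop exits because its condition became false,
    # never when it exits via `break`.
    i = 0
    while i < 3:
        if i == 10:   # never true, so we never break
            break
        i += 1
    else:
        print("condition became false without break -> else runs")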

I do not know how to solve it. I also checked `torch.utils.data.BatchSampler`; my guess is that the condition becomes false because my sampler has not produced any data yet, but that is just a guess.
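
One workaround I might try (just my own idea, not a fix for the underlying issue) is to detect whether a debugger is attached and fall back to single-process loading. `sys.gettrace()` returns a non-None trace function under most Python debuggers, including PyCharm's; `safe_num_workers` here is a hypothetical helper of mine:

    import sys

    def safe_num_workers(requested):
        # Use 0 workers (single-process loading) whenever a debugger's
        # trace function is active, otherwise the requested count.
        return 0 if sys.gettrace() is not None else requested

    # usage with the loader above:
    # loader = DataLoader(ToyDataset(), batch_size=8,
    #                     num_workers=safe_num_workers(num_workers_per_gpu))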

cc @VitalyFedyunin for data loader questions

Hi! Do you have the actual error trace?