Error: module 'multiprocessing.util' has no attribute '_flush_std_streams'

Steps to reproduce the behavior:

  1. Follow the tutorial code: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
  2. Run it without changing anything; the error comes out. Even after downloading the official code, the error still exists.
  3. It seems the DataLoader cannot be enumerated: as soon as enumerate(dataloader) is called, the error appears: "module 'multiprocessing.util' has no attribute '_flush_std_streams'" (a minimal sketch of this pattern follows the list).
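
For context, a minimal sketch of the pattern that triggers the failure; the toy dataset below is a hypothetical stand-in for the tutorial's transformed_dataset, since any DataLoader with num_workers > 0 reaches the same worker-startup code path:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Hypothetical stand-in for the tutorial's transformed_dataset."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return {'image': torch.zeros(3, 224, 224),
                'landmarks': torch.zeros(68, 2)}

dataloader = DataLoader(ToyDataset(), batch_size=4,
                        shuffle=True, num_workers=4)

# The AttributeError is raised here, when the worker processes are started.
for i_batch, sample_batched in enumerate(dataloader):
    print(i_batch, sample_batched['image'].size(),
          sample_batched['landmarks'].size())
```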

Expected behavior

The DataLoader from the tutorial should be enumerable without error, yielding batches of images and landmarks.

Environment

PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: 8.0.61

OS: Ubuntu 16.04 LTS
GCC version: (Ubuntu 4.9.3-13ubuntu2) 4.9.3
CMake version: version 3.5.1

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P100-PCIE-16GB
GPU 2: Tesla P100-PCIE-16GB
GPU 3: Tesla P100-PCIE-16GB
GPU 4: Tesla P100-PCIE-16GB
GPU 5: Tesla P100-PCIE-16GB
GPU 6: Tesla P100-PCIE-16GB
GPU 7: Tesla P100-PCIE-16GB

Nvidia driver version: 384.90
cuDNN version: Probably one of the following:
/usr/local/MATLAB/R2016b/bin/glnxa64/libcudnn.so.4.0.7

Versions of relevant libraries:
[pip] numpy (1.15.4)
[pip] torch (0.4.1)
[pip] torchvision (0.2.1)
[conda] cuda80 1.0 h205658b_0 pytorch
[conda] pytorch 0.4.1 py36_cuda8.0.61_cudnn7.1.2_1 [cuda80] pytorch
[conda] torchvision 0.2.1 py36_1 pytorch

Additional context

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
     21 plt.title('Batch from dataloader')
     22
---> 23 for i_batch, sample_batched in enumerate(dataloader):
     24     print(i_batch, sample_batched['image'].size(),
     25           sample_batched['landmarks'].size())

~/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __iter__(self)
    499
    500     def __iter__(self):
--> 501         return _DataLoaderIter(self)
    502
    503     def __len__(self):

~/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __init__(self, loader)
    287             for w in self.workers:
    288                 w.daemon = True  # ensure that the worker exits on process exit
--> 289                 w.start()
    290
    291             _update_worker_pids(id(self), tuple(w.pid for w in self.workers))

~/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/process.py in start(self)
    103                'daemonic processes are not allowed to have children'
    104         _cleanup()
--> 105         self._popen = self._Popen(self)
    106         self._sentinel = self._popen.sentinel
    107         # Avoid a refcycle if the target function holds an indirect

~/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/context.py in _Popen(process_obj)
    221     @staticmethod
    222     def _Popen(process_obj):
--> 223         return _default_context.get_context().Process._Popen(process_obj)
    224
    225 class DefaultContext(BaseContext):

~/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/context.py in _Popen(process_obj)
    275     def _Popen(process_obj):
    276         from .popen_fork import Popen
--> 277         return Popen(process_obj)
    278
    279 class SpawnProcess(process.BaseProcess):

~/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/popen_fork.py in __init__(self, process_obj)
     15
     16     def __init__(self, process_obj):
---> 17         util._flush_std_streams()
     18         self.returncode = None
     19         self._launch(process_obj)

AttributeError: module 'multiprocessing.util' has no attribute '_flush_std_streams'
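
A possible workaround while the environment issue is sorted out (a sketch, not a fix for the underlying problem): loading data in the main process avoids starting worker processes, so the failing multiprocessing path is never reached:

```python
# Workaround sketch: with num_workers=0 the data is loaded in the main
# process, no worker processes are started, and util._flush_std_streams()
# is never called. transformed_dataset is the tutorial's dataset object.
dataloader = DataLoader(transformed_dataset, batch_size=4,
                        shuffle=True, num_workers=0)
```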

As answered in this thread, this seems to be an issue with the Python multiprocessing library rather than with PyTorch.
Could you try updating or reinstalling it?
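
A quick way to check whether a stale or shadowing copy of the module is being imported (a diagnostic sketch, to be run inside the same conda environment):

```python
import multiprocessing.util as util

# This should point into the environment's standard library, e.g.
# ~/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/util.py
print(util.__file__)

# On a stock Python 3.6 install this prints True; False suggests an
# outdated or third-party copy of multiprocessing is shadowing the stdlib.
print(hasattr(util, '_flush_std_streams'))
```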