DataLoader crashes during training; it seems to be related to multiprocessing in Docker

I'm running into the same problem.

Traceback (most recent call last):
  File "train.py", line 20, in <module>
    for i, data in enumerate(dataset):
  File "/home/elias/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 195, in __next__
    idx, batch = self.data_queue.get()
  File "/usr/lib/python3.5/multiprocessing/queues.py", line 345, in get
    return ForkingPickler.loads(res)
  File "/home/elias/.local/lib/python3.5/site-packages/torch/multiprocessing/reductions.py", line 70, in rebuild_storage_fd
    fd = df.detach()
  File "/usr/lib/python3.5/multiprocessing/resource_sharer.py", line 57, in detach
    with _resource_sharer.get_connection(self._id) as conn:
  File "/usr/lib/python3.5/multiprocessing/resource_sharer.py", line 87, in get_connection
    c = Client(address, authkey=process.current_process().authkey)
  File "/usr/lib/python3.5/multiprocessing/connection.py", line 487, in Client
    c = SocketClient(address)
  File "/usr/lib/python3.5/multiprocessing/connection.py", line 614, in SocketClient
    s.connect(address)
ConnectionRefusedError: [Errno 111] Connection refused

I don't use conda or miniconda, in case that helps rule anything out.
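
In case it's useful, here is a minimal sketch of the workaround I've seen suggested, assuming the crash comes from the default file_descriptor sharing strategy failing inside the container (the connection in resource_sharer.py is refused). MyDataset and the shapes below are just placeholders, not from my actual code:

import torch
import torch.multiprocessing
from torch.utils.data import DataLoader, Dataset

# The default file_descriptor strategy hands tensors between worker
# processes over sockets, which is the code path that raises
# ConnectionRefusedError above. file_system shares via files instead.
torch.multiprocessing.set_sharing_strategy('file_system')

class MyDataset(Dataset):  # placeholder dataset
    def __len__(self):
        return 100

    def __getitem__(self, idx):
        return torch.randn(3, 224, 224), idx

loader = DataLoader(MyDataset(), batch_size=4, num_workers=4)

for i, (data, target) in enumerate(loader):
    pass  # training step would go here

Alternatively, running the container with docker run --ipc=host (or a larger --shm-size) is supposed to help, and setting num_workers=0 avoids worker processes entirely as a last resort.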