BrokenPipeError: [Errno 32] Broken pipe when I run the tutorial script

Hi Soumith,

I downloaded the code from the tutorial. When I run this file in Spyder on Windows 10, it reports "BrokenPipeError: [Errno 32] Broken pipe". The detailed information is as follows:

runfile('D:/PyTorch/Code/LearnPyTorch/', wdir='D:/PyTorch/Code/LearnPyTorch')
Files already downloaded and verified
Files already downloaded and verified
Traceback (most recent call last):
  File "", line 1, in
    runfile('D:/PyTorch/Code/LearnPyTorch/', wdir='D:/PyTorch/Code/LearnPyTorch')
  File "D:\Anaconda\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 880, in runfile
    execfile(filename, namespace)
  File "D:\Anaconda\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
    exec(compile(, filename, 'exec'), namespace)
  File "D:/PyTorch/Code/LearnPyTorch/", line 99, in
    dataiter = iter(trainloader)
  File "D:\Anaconda\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 303, in __iter__
    return DataLoaderIter(self)
  File "D:\Anaconda\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 162, in __init__
  File "D:\Anaconda\Anaconda3\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Anaconda\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Could you give me any advice to help me run this file successfully? By the way, the other examples in "Deep Learning with PyTorch: A 60 Minute Blitz" have run successfully.


Found a solution that works for me on:


Hi @keloli!

The problem has to do with multiprocessing, the DataLoader class, and Windows broadly, but I'm not familiar with the details. What helped me was to set the num_workers parameter of the data loaders to either 0 or 1.

Original (not working):

# ... code ...
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
# ... code ...
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
# ... code ...

Modified (working):

# ... code ...
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=0)
# ... code ...
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=0)
# ... code ...

However, the real way around this problem lies in refactoring your code to comply with Python's Windows-specific multiprocessing guidelines, as discussed in this StackOverflow thread.

This subject is touched upon in the Python 2 documentation for multiprocessing: Programming Guidelines, Windows. While the Python 3 documentation shares similar guidelines (see here), the Python 2 version is more explicit about Windows:

Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).

In short, the idea here would be to wrap the example code inside an if __name__ == '__main__' statement as follows:

# Deep Learning with PyTorch: A 60 Minute Blitz » Training a classifier 
# Load the CIFAR10 data
import torch
import torchvision
import torchvision.transforms as transforms

# Safe DataLoader multiprocessing with Windows
if __name__ == '__main__':
    # Code to load the data with num_workers > 1 (loading code as in the tutorial)
    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)

While the tutorial seems to define multiple scripts, the best way around this would be to wrap all operations in functions and then call them inside an if __name__ == '__main__' clause:

# Imports for dataset generation, training, etc.

def load_datasets():
    # Code to load the datasets with multiple workers
    ...

def train():
    # Code to train the model
    ...

if __name__ == '__main__':
    load_datasets()
    train()
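To see why the guard matters, here is a minimal, PyTorch-free sketch using only the standard library (the `square` worker is just an illustrative stand-in): on Windows, child processes are started with the "spawn" method, which re-imports the main module, so any process-spawning code left at the top level would run again in every child.

```python
import multiprocessing as mp

def square(x):
    # Work done in a child process
    return x * x

if __name__ == '__main__':
    # Without this guard, the spawned children would re-execute the
    # Pool creation on import, recursing until something breaks
    # (e.g. a BrokenPipeError on Windows).
    with mp.Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```

The same pattern applies to a DataLoader with num_workers > 0: keep the loader creation and iteration under the guard, and it is safe on Windows.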

Hi @karmus89!
I'm so appreciative of your solution; I will try it later!

I met the same problem; your solution is great. :+1:

Yes, this solved the problem. Thank you very much!

It works, thank you!

Thank you so much, friend. I was really stuck on that problem and couldn't find any solution, but this works. Thank you so much!

Thanks for your solution, karmus89!

Oh thank you it solved my problem too.

@karmus89's reply to set the number of workers to 0 avoids multiprocessing altogether, while @Julia_mdr's answer makes multiprocessing work on Windows by adding the if __name__ == "__main__" guard to the code. Correct me if I am wrong.

I'm experiencing this problem on Windows 10, but in a Jupyter notebook. Does anyone have a solution for this?