runfile('D:/PyTorch/Code/LearnPyTorch/cifar10_tutorial.py', wdir='D:/PyTorch/Code/LearnPyTorch')
Files already downloaded and verified
Files already downloaded and verified
Traceback (most recent call last):
  File "", line 1, in <module>
    runfile('D:/PyTorch/Code/LearnPyTorch/cifar10_tutorial.py', wdir='D:/PyTorch/Code/LearnPyTorch')
  File "D:\Anaconda\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 880, in runfile
    execfile(filename, namespace)
  File "D:\Anaconda\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "D:/PyTorch/Code/LearnPyTorch/cifar10_tutorial.py", line 99, in <module>
    dataiter = iter(trainloader)
  File "D:\Anaconda\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 303, in __iter__
    return DataLoaderIter(self)
  File "D:\Anaconda\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 162, in __init__
    w.start()
  File "D:\Anaconda\Anaconda3\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Anaconda\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
Could you give me any advice to help me run "cifar10_tutorial.py" successfully? By the way, the other examples in "Deep Learning with PyTorch: A 60 Minute Blitz" have all run successfully.
The problem has to do with multiprocessing, the DataLoader class, and Windows more broadly, but I'm not familiar with the details. What helped me was to set the num_workers parameter to either 0 or 1 for the data loaders, as in the sketch below.
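For what it's worth, here is a minimal sketch of that workaround. The transform, dataset root and batch size simply mirror the tutorial's data-loading code and are only illustrative; the only real change is num_workers=0:

import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)

# num_workers=0 loads batches in the main process, so no worker process
# has to re-import the script (which is what breaks on Windows)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=0)

dataiter = iter(trainloader)
images, labels = next(dataiter)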
However, the real way around this problem is to refactor your code to comply with Python's Windows-specific multiprocessing guidelines, as discussed in this StackOverflow thread.
This subject is touched upon in the Python 2 documentation for multiprocessing (Programming guidelines, Windows). While the Python 3 documentation shares similar guidelines (see here), the Python 2 version is more explicit about Windows:
Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).
In short, the idea here is to wrap the example code inside an if __name__ == '__main__' clause, as follows:
# Deep Learning with PyTorch: A 60 Minute Blitz » Training a Classifier
# Load the CIFAR-10 data
import torch
import torchvision
import torchvision.transforms as transforms

# Safe DataLoader multiprocessing with Windows
if __name__ == '__main__':
    # Code to load the data with num_workers > 1
While the tutorial seems to define multiple scripts, the best way around this would be to wrap all operations in functions and then call them inside an if __name__ == '__main__' clause, as in the skeleton below (a fuller sketch follows it):
# Imports for dataset generation, training, etc.

def load_datasets(...):
    # Code to load the datasets with multiple workers

def train(...):
    # Code to train the model

if __name__ == '__main__':
    load_datasets()
    train()
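For completeness, here is a more concrete sketch of that refactor. The data loading mirrors the tutorial, but the model inside train() is a deliberately tiny stand-in (a single linear layer rather than the tutorial's Net), and the function names and hyperparameters are only illustrative:

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms


def load_datasets(root='./data', batch_size=4, num_workers=2):
    # Worker processes are safe here because this function is only
    # called from inside the __main__ guard below
    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(root=root, train=True,
                                            download=True, transform=transform)
    return torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                       shuffle=True, num_workers=num_workers)


def train(trainloader, epochs=1):
    # Tiny stand-in model: one linear layer over the flattened 3x32x32 image
    net = nn.Linear(3 * 32 * 32, 10)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
    for epoch in range(epochs):
        for inputs, labels in trainloader:
            optimizer.zero_grad()
            outputs = net(inputs.view(inputs.size(0), -1))
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()


if __name__ == '__main__':
    trainloader = load_datasets()
    train(trainloader)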
@karmus89's suggestion to set the number of workers to 0 amounts to not using multiprocessing at all, while @Julia_mdr's answer makes multiprocessing work on Windows by simply adding the if __name__ == "__main__" guard to the code. Correct me if I am wrong.