From the PyTorch tutorial, I found the following paragraph:
import torch.nn as nn

class DataParallelModel(nn.Module):

    def __init__(self):
        super().__init__()
        self.block1 = nn.Linear(10, 20)

        # wrap block2 in DataParallel
        self.block2 = nn.Linear(20, 20)
        self.block2 = nn.DataParallel(self.block2)

        self.block3 = nn.Linear(20, 20)

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        return x
The code does not need to be changed in CPU-mode.
The documentation for DataParallel is here.
Primitives on which DataParallel is implemented:
However, I do not understand what "The code does not need to be changed in CPU-mode" means. I am trying to run a model that loads the pre-trained 'resnet152' model, applies 'DataParallel()' to it, and then replaces its last fc layer with a new one. It works well with a GPU. However, on a CPU-only machine, it goes wrong. I think the error is due to the 'DataParallel()' call, but the tutorial says the code does not need to be changed? Also, I do not know how to switch the model to CPU-mode. The error I get is:
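Roughly, my setup looks like the sketch below. I am using a toy Backbone class here as a stand-in for torchvision's resnet152 (so nothing needs to be downloaded); the only property that matters is that it exposes a final `fc` layer, like resnet152 does. The 10-class head is just an example.

```python
import torch
import torch.nn as nn

# Stand-in for the pre-trained network; in my real code this is
# torchvision's resnet152, which also ends in an `fc` layer.
class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(32, 2048)
        self.fc = nn.Linear(2048, 1000)

    def forward(self, x):
        return self.fc(torch.relu(self.features(x)))

model = Backbone()
# This is the step that raises the IndexError for me on the CPU-only machine:
model = nn.DataParallel(model)

# After wrapping, the original model lives under `.module`,
# so the fc layer has to be replaced there.
num_features = model.module.fc.in_features
model.module.fc = nn.Linear(num_features, 10)  # e.g. 10 target classes
```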
File “/usr/local/intel/intelpython2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py”, line 47, in __init__
output_device = device_ids[0]
IndexError: list index out of range
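In case it helps to show what I mean, this is the workaround I am currently trying (a sketch with a toy Sequential model, not my actual resnet code): only wrap the model in DataParallel when CUDA is actually available, since with no GPUs the device_ids list is empty and indexing `device_ids[0]` in `__init__` raises the IndexError above.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))

# Guard the wrap: on a CPU-only machine torch.cuda.is_available() is
# False, so the model is used directly and DataParallel is never built.
if torch.cuda.is_available():
    model = nn.DataParallel(model.cuda())

out = model(torch.randn(4, 10))
print(out.shape)  # torch.Size([4, 2])
```

But I am not sure whether this is the intended way, or whether there is a cleaner switch to CPU-mode.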