Torchvision model multiprocessing: pickle.PickleError during inference


I’m trying to run inference with PyTorch multiprocessing and a torchvision Faster R-CNN model, but I’m running into an error. I would appreciate some help with this.

Here is the mp part:

        processes = []
        for data_dict in list_data_dicts:
            # defects_coordinates = detector.detection_pipeline(args.imgpath)
            p = Process(target=self.pipe, args=(data_dict, ))
            p.start()
            processes.append(p)
        for process in processes:
            process.join()

My model is loaded as follows:

def load_model_weights(self):
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = (
            torchvision.models.detection.faster_rcnn.FastRCNNPredictor(in_features, self.num_classes)
        )
        checkpoint = torch.load(self.model_path, map_location=self.device)
        model.load_state_dict(checkpoint)  # assumes the checkpoint is the state dict itself
        return model

Here are my imports:

from torch.multiprocessing import Pool, Process, set_start_method
#torch.backends.cudnn.benchmark = True
try:
    set_start_method('spawn')
except RuntimeError:
    pass

And finally, the error:

Traceback (most recent call last):
  File "", line 57, in <module>
    defects_coordinates = detector.detection_pipeline(imgs)
  File "/data/fis/pytorch-rcnn/inference/", line 70, in detection_pipeline
  File "/home/ansible/miniconda3/lib/python3.7/multiprocessing/", line 112, in start
    self._popen = self._Popen(self)
  File "/home/ansible/miniconda3/lib/python3.7/multiprocessing/", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/home/ansible/miniconda3/lib/python3.7/multiprocessing/", line 284, in _Popen
    return Popen(process_obj)
  File "/home/ansible/miniconda3/lib/python3.7/multiprocessing/", line 32, in __init__
  File "/home/ansible/miniconda3/lib/python3.7/multiprocessing/", line 20, in __init__
  File "/home/ansible/miniconda3/lib/python3.7/multiprocessing/", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/home/ansible/miniconda3/lib/python3.7/multiprocessing/", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "/data/fis/pytorch-rcnn/env-rcnn/lib/python3.7/site-packages/torch/jit/", line 1648, in __getstate__
    "Mixed serialization of script and non-script modules is not supported. " +
_pickle.PickleError: ScriptModules cannot be deepcopied using copy.deepcopy or saved using torch.save. Mixed serialization of script and non-script modules is not supported. For purely script modules use<filename>) instead.

If someone already knows how to handle this, it would help me a lot.
Thank you,


It only works if I reload the model in each process (inside self.pipe), but I want to avoid doing this.
I also tried to copy.deepcopy the model, but that doesn’t work either.
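For context, here is a minimal sketch of the pattern I was aiming for: load the model once as a plain (non-scripted) nn.Module, call share_memory() on it, and pass it to each worker under the spawn start method. The tiny Linear model and the names run_inference/main are stand-ins of my own, not the real Faster R-CNN pipeline:

```python
import torch
import torch.nn as nn
from multiprocessing import Manager
from torch.multiprocessing import Process, set_start_method

# Worker that runs inference on an already-loaded model.
# `run_inference` and the tiny Linear model are stand-ins for
# self.pipe and the Faster R-CNN detector.
def run_inference(model, batch, results, idx):
    with torch.no_grad():
        results[idx] = model(batch).sum().item()

def main():
    try:
        set_start_method("spawn")
    except RuntimeError:
        pass  # start method was already set

    model = nn.Linear(4, 2)
    model.eval()
    model.share_memory()  # put parameters in shared memory once, before spawning

    results = Manager().list([None, None])
    processes = []
    for i in range(2):
        p = Process(target=run_inference,
                    args=(model, torch.ones(1, 4), results, i))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
    return list(results)

if __name__ == "__main__":
    print(main())
```

This pattern only works while the model stays a regular nn.Module; as soon as part of it becomes a ScriptModule, pickling it for spawn fails with the error above.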