Why does multiprocessing fail but not threading.Thread? ("Mixed serialization of script and non-script modules is not supported.")

I am trying to spawn a separate process that runs in parallel with the main script, but every time the script is about to execute its main method, it fails with this error:

_init_model took 13.0407 sec or 13040.70 ms
Traceback (most recent call last):
  File "c:\Users\User\Anaconda3\Lib\site-packages\QRTC\QRTC_testbed.py", line 49, in <module>
    start()
  File "c:\Users\User\Anaconda3\Lib\site-packages\QRTC\QRTC_testbed.py", line 27, in start
    QRTC = FaceVerification(**cfg['Kara']['ARGS'], threshold=65)
  File "C:\Users\User\Anaconda3\lib\site-packages\QRTC\Core.py", line 150, in __init__
    self.dispatcher_executer.start()
  File "C:\Users\User\Anaconda3\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\User\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\User\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\User\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\User\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Users\User\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 1755, in __getstate__
    "Mixed serialization of script and non-script modules is not supported. " +
_pickle.PickleError: ScriptModules cannot be deepcopied using copy.deepcopy or saved using torch.save. Mixed serialization of script and non-script modules is not supported. For purely script modules use my_script_module.save(<filename>) instead.
Destructor called!
PS C:\Users\User\Anaconda3\Lib\site-packages\QRTC> C:\Users\User\Anaconda3\lib\site-packages\QRTC\Core.py:367: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  def _save_jit_optimized_model(self, dummy_input=torch.tensor(torch.rand(size=(1,3,112,112)))):
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\User\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\User\Anaconda3\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

All I did was instantiate a new multiprocessing.Process in the constructor, after _init_model is done. Something like this:

import torch.multiprocessing as torch_multiprocessing

class QRTC():
    def __init__(self, arg1, arg2, etc):
        .....
        self._init_model()
        # after all is done, instantiate this
        self.dispatcher_executer = torch_multiprocessing.Process(target=self.execute_dispatcher_callbacks)
        self.dispatcher_executer.start()
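The failure comes from how the child process is started: on Windows, multiprocessing uses the spawn method, which pickles the Process target, and pickling a bound method like self.execute_dispatcher_callbacks pickles the whole instance it is bound to, including whatever _init_model stored on self. A minimal pure-Python sketch of the mechanism (a file handle stands in for the unpicklable ScriptModule; all names here are made up for illustration):

```python
import os
import pickle

class Worker:
    def __init__(self):
        # stand-in for an unpicklable attribute such as a jit ScriptModule
        self.handle = open(os.devnull)

    def run(self):
        pass

w = Worker()
try:
    # pickling a bound method pickles its __self__, i.e. the whole instance
    pickle.dumps(w.run)
    ok = True
except TypeError:
    ok = False
print(ok)  # False: the file handle (like a ScriptModule) cannot be pickled
```

The target function itself contains no torch code, but that does not matter: it is the instance carrying the model that has to cross the process boundary.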

Why am I seeing this? In self.execute_dispatcher_callbacks I'm dealing with pure Python and nothing torch related, so it's very weird to me why this is happening.
The interesting part is that using threading.Thread doesn't produce any errors at all!
By the way, if I do the initialization after I instantiate and start my process, everything seems to be fine!
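That asymmetry is consistent with the pickling explanation: a thread runs inside the same process and simply calls the target directly, so nothing is ever serialized. A toy sketch of the same class shape (again with a file handle standing in for the ScriptModule; names are illustrative only):

```python
import os
import threading

class QRTCLike:
    """Toy stand-in for the class above; the real model is a file handle here."""
    def __init__(self):
        self.model = open(os.devnull)  # unpicklable, like a ScriptModule
        self.ran = False
        # threads share the parent's memory, so the instance is never pickled
        self.worker = threading.Thread(target=self.execute_dispatcher_callbacks)
        self.worker.start()

    def execute_dispatcher_callbacks(self):
        self.ran = True

q = QRTCLike()
q.worker.join()
print(q.ran)  # True: the thread ran despite the unpicklable attribute
```

It also explains why starting the process before _init_model works: at that moment the instance contains nothing unpicklable yet.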

Any help is greatly appreciated

After nearly the whole day I found out that Python's multiprocessing module doesn't play well with classes: it requires all kinds of shenanigans to get even a simple class example to work, and even after such endeavours 99.99% of the cases will fail or give you extreme headaches.
The good thing is that there is a library named pathos which is class friendly and accepts class objects. It uses dill for serialization instead of pickle and thus works out of the box!
However, PyTorch seems to use pickle internally, so even pathos fails because of pickle!

PS C:\Users\User\Anaconda3\Lib\site-packages\FV>  ${env:DEBUGPY_LAUNCHER_PORT}='54384'; & 'C:\Users\User\Anaconda3\python.exe' 'c:\Users\User\.vscode\extensions\ms-python.python-2020.4.76186\pythonFiles\lib\python\debugpy\wheels\debugpy\launcher' 'c:\Users\User\Anaconda3\Lib\site-packages\FV\fv_testbed.py'
C:\Users\User\Anaconda3\lib\site-packages\FV\F_V.py:479: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  def _save_jit_optimized_model(self, dummy_input=torch.tensor(torch.rand(size=(1,3,112,112)))):
c:\Users\User\Anaconda3\Lib\site-packages\FV\fv_testbed.py:27: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  cfg = yaml.load(config)
C:\Users\User\Anaconda3\lib\site-packages\FV\F_V.py:351: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  cfg = yaml.load(config)
Loading pretrained model from C:/Users/User/Anaconda3/Lib/site-packages/FV/Model_Zoo/RETINAFACE/mobilenet0.25_Final.pth
remove prefix 'module.'
C:\Users\User\Anaconda3\lib\site-packages\FV\F_V.py:231: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  cfg = yaml.load(config)
jit is enabled
postfix: r18_jit
Loading pretrained model from C:/Users/User/Anaconda3/Lib/site-packages/FV/Model_Zoo/RETINAFACE/mobilenet0.25_Final.pth
remove prefix 'module.'
building embedding...
building embedding...
building embedding...
an exception has occured: ('list index out of range',)
building embedding...
_init_model took 13.0786 sec or 13078.56 ms
Traceback (most recent call last):
  File "c:\Users\User\Anaconda3\Lib\site-packages\FV\fv_testbed.py", line 54, in <module>
    start()
  File "c:\Users\User\Anaconda3\Lib\site-packages\FV\fv_testbed.py", line 32, in start
    fv.init_dispatcher()
  File "C:\Users\User\Anaconda3\lib\site-packages\FV\F_V.py", line 431, in init_dispatcher
    self._worker.map(self.execute_dispatcher_callbacks,[''])
  File "C:\Users\User\Anaconda3\lib\site-packages\pathos\multiprocessing.py", line 137, in map
    return _pool.map(star(f), zip(*args)) # chunksize
  File "C:\Users\User\Anaconda3\lib\site-packages\multiprocess\pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\User\Anaconda3\lib\site-packages\multiprocess\pool.py", line 657, in get
    raise self._value
  File "C:\Users\User\Anaconda3\lib\site-packages\multiprocess\pool.py", line 431, in _handle_tasks
    put(task)
  File "C:\Users\User\Anaconda3\lib\site-packages\multiprocess\connection.py", line 209, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "C:\Users\User\Anaconda3\lib\site-packages\multiprocess\reduction.py", line 54, in dumps
    cls(buf, protocol, *args, **kwds).dump(obj)
  File "C:\Users\User\Anaconda3\lib\site-packages\dill\_dill.py", line 445, in dump
    StockPickler.dump(self, obj)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 437, in dump
    self.save(obj)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 789, in save_tuple
    save(element)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 774, in save_tuple
    save(element)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 774, in save_tuple
    save(element)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\User\Anaconda3\lib\site-packages\dill\_dill.py", line 1413, in save_function
    obj.__dict__, fkwdefaults), obj=obj)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 638, in save_reduce
    save(args)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 789, in save_tuple
    save(element)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 774, in save_tuple
    save(element)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\User\Anaconda3\lib\site-packages\dill\_dill.py", line 1147, in save_cell
    pickler.save_reduce(_create_cell, (f,), obj=obj)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 638, in save_reduce
    save(args)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 774, in save_tuple
    save(element)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
    pickler.save_reduce(MethodType, (obj.__func__, obj.__self__), obj=obj)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 638, in save_reduce
    save(args)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 774, in save_tuple
    save(element)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 549, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 662, in save_reduce
    save(state)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Users\User\Anaconda3\lib\site-packages\dill\_dill.py", line 912, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 885, in _batch_setitems
    save(v)
  File "C:\Users\User\Anaconda3\lib\pickle.py", line 524, in save
    rv = reduce(self.proto)
  File "C:\Users\User\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 1755, in __getstate__
    "Mixed serialization of script and non-script modules is not supported. " +
_pickle.PickleError: ScriptModules cannot be deepcopied using copy.deepcopy or saved using torch.save. Mixed serialization of script and non-script modules is not supported. For purely script modules use my_script_module.save(<filename>) instead.
Destructor called!

I’m stuck now, and I’m not sure whether this should be addressed by PyTorch, the dill module, or even pathos!
So any help is greatly appreciated
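For reference, one common workaround (a sketch under assumptions, not the library's official API) is to keep the unpicklable model out of the state that gets sent to the child by defining __getstate__/__setstate__, and rebuild or reload the model inside the child process. The class and attribute names below are hypothetical, and a file handle again stands in for the ScriptModule:

```python
import os
import pickle

class FaceVerificationLike:
    def __init__(self):
        # stand-in for the jit-compiled model (hypothetical attribute)
        self.model = open(os.devnull)

    def __getstate__(self):
        # drop the unpicklable attribute before the instance crosses processes
        state = self.__dict__.copy()
        del state["model"]
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        # in the child you would reload the saved ScriptModule here,
        # e.g. from a file written with my_script_module.save(<filename>)
        self.model = None

fv = FaceVerificationLike()
clone = pickle.loads(pickle.dumps(fv))
print(clone.model)  # None: the instance now survives pickling
```

With this in place, pickle (and therefore multiprocessing's spawn) no longer tries to serialize the ScriptModule at all.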

side note:
And oh, by the way: threading works because all threads run inside the same process and share its memory, whereas multiprocessing spawns a brand-new process whose state is copied (i.e. pickled) from the current process. Threading in Python benefits you only if your operations are IO bound and the CPU is idle most of the time; if your operations are themselves CPU intensive, threading won't give you any benefit (because of the GIL) and may very well degrade your performance. That's why multiprocessing is used.