CUDA multiprocessing: training multiple models in different processes on a single GPU

Hi! I had a similar issue. After some googling I found these:

The second link showed the solution:

import torch.multiprocessing

def some_method():
    # 'forkserver' (or 'spawn') starts child processes in a fresh
    # interpreter, so they can initialize CUDA; the default 'fork'
    # start method cannot re-initialize CUDA in the child.
    mp = torch.multiprocessing.get_context('forkserver')  # <-- This does the magic
    pool = mp.Pool(processes=1)
    ...
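Since torch.multiprocessing mirrors the standard multiprocessing API, the pattern can be sketched with the standard library alone. This is a minimal illustration (the names square and run_pool are made up for the example); it uses 'spawn', which, like 'forkserver', avoids the CUDA-unsafe default 'fork' start method on Linux:

```python
import multiprocessing

def square(x):
    # Stand-in for per-process work; in the real use case each worker
    # would build and train its own model on the GPU.
    return x * x

def run_pool():
    # 'spawn' (like 'forkserver') launches fresh interpreter processes,
    # so each child could safely initialize its own CUDA context.
    ctx = multiprocessing.get_context('spawn')
    with ctx.Pool(processes=2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == '__main__':
    print(run_pool())  # [1, 4, 9]
```

Note the `if __name__ == '__main__':` guard: with 'spawn' or 'forkserver', child processes re-import the main module, so unguarded process creation at module level would recurse.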

Hope it helps!