Multi-GPU training of GANs

I have a class for a single GPU, like this:
class mmodel():

    def setgpu(self, gpu):
        # Move every sub-network of the GAN to the given GPU.
        self.gpu = gpu
        self.disA.cuda(self.gpu)
        self.disB.cuda(self.gpu)
        self.disA2.cuda(self.gpu)
        self.disB2.cuda(self.gpu)
        self.enc_c.cuda(self.gpu)
        self.enc_a.cuda(self.gpu)
        self.gen.cuda(self.gpu)

Now, in the main function:
model = mmodel(opts)
model.setgpu(opts.gpu)

I want to run this model on 2 GPUs. Can anyone help me? Do I need to change the batch size? My batch size is 2.

If you would like to use data parallelism, you could simply wrap your model in nn.DataParallel or nn.DistributedDataParallel. Have a look at this tutorial for more information.
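For example, here is a minimal sketch of wrapping a single network in nn.DataParallel (the gen network below is a placeholder, not your actual generator):

import torch
import torch.nn as nn

# Placeholder generator; substitute your own network here.
gen = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784))

if torch.cuda.device_count() > 1:
    # DataParallel replicates the module on each listed GPU and
    # splits the input batch along dim 0 across the replicas.
    gen = nn.DataParallel(gen, device_ids=[0, 1])
gen = gen.cuda()

x = torch.randn(2, 100).cuda()  # batch of 2 -> 1 sample per GPU
out = gen(x)                    # outputs are gathered on GPU 0

Regarding the batch size: nn.DataParallel splits each batch across the GPUs, so with a batch size of 2 each GPU would only see a single sample. You don't have to change it for the code to run, but you would typically increase it to keep both GPUs busy.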

What if I have two sub-models, say model1 and model2? Do I need to write it as follows?
model1 = nn.DataParallel(model1)
model2 = nn.DataParallel(model2)

If both submodules are registered in the same parent module, you could call nn.DataParallel just on the parent. If not, then your approach should work.
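To illustrate the first case, here is a minimal sketch (the Parent module and its linear sub-models are made up for the example):

import torch
import torch.nn as nn

class Parent(nn.Module):
    def __init__(self):
        super().__init__()
        # Both sub-models are registered on the parent, so wrapping
        # the parent in nn.DataParallel covers them as well.
        self.model1 = nn.Linear(10, 10)
        self.model2 = nn.Linear(10, 10)

    def forward(self, x):
        return self.model2(self.model1(x))

parent = nn.DataParallel(Parent()).cuda()
out = parent(torch.randn(4, 10).cuda())  # batch split across GPUs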
