I have a class written for a single GPU, with a method like:

    def setgpu(self, gpu):
        self.gpu = gpu
Now in main function:
model = mmodel(opts)
I want to run this model on 2 GPUs. Can anyone help me? Do I need to change the batch size? My batch size is currently 2.
If you would like to use data parallelism, you could simply wrap your model in
nn.DataParallel (or use nn.DistributedDataParallel for multi-process training). Have a look at this tutorial for more information.
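A minimal sketch of the single-process approach, assuming a hypothetical stand-in for your mmodel class (nn.DataParallel splits each input batch across the visible GPUs, so a batch size of 2 gives 1 sample per GPU; it still runs, though a larger batch usually keeps both GPUs busier):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the poster's mmodel class.
class MModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 4)

    def forward(self, x):
        return self.fc(x)

model = MModel()
if torch.cuda.device_count() > 1:
    # Wrap once; DataParallel scatters each batch across the GPUs
    # and gathers the outputs back on the default device.
    model = nn.DataParallel(model)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Batch of 2: with 2 GPUs each device sees 1 sample per forward pass.
x = torch.randn(2, 8, device=device)
out = model(x)
```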
What if I have two sub-models, say model1 and model2? Do I need to write the following?
model1 = nn.DataParallel(model1)
model2 = nn.DataParallel(model2)
If both submodules are registered in the same parent module, you could call
nn.DataParallel just on the parent. If not, then your approach should work.
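A sketch of the parent-module approach, with hypothetical Parent, model1, and model2 placeholders (assigning the submodules as attributes registers them, so one wrapper covers both):

```python
import torch
import torch.nn as nn

class Parent(nn.Module):
    def __init__(self):
        super().__init__()
        # Assigning nn.Modules as attributes registers them as submodules.
        self.model1 = nn.Linear(8, 8)   # stand-in for model1
        self.model2 = nn.Linear(8, 4)   # stand-in for model2

    def forward(self, x):
        return self.model2(self.model1(x))

parent = Parent()
if torch.cuda.device_count() > 1:
    # One wrapper on the parent parallelizes both submodules together.
    parent = nn.DataParallel(parent)
out = parent(torch.randn(2, 8))
```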