Can every model in PyTorch be parallelised?

I am trying to run a deep learning model for a segmentation task, and I have 2 GPUs.
My question is whether I can parallelise my model or not.
My batch size is 1 and I can't easily change it (maybe I could, but I'd have to figure that out) because of some image transformations in the code.

You could most likely use nn.DistributedDataParallel to run on both GPUs, each with a batch size of 1. I assume your model fits on a single GPU and you don't need to split it across two devices.
Would that work for you?
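
In case a sketch helps: the usual pattern is one process per GPU, each loading its own samples with batch size 1. The model, dummy dataset, address, and port below are placeholders, not your actual code; this is just a minimal outline of the setup.

# Hypothetical sketch: one process per GPU, each with batch size 1.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def worker(rank, world_size):
    # Rendezvous info for the two processes; address and port are placeholders.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "23456"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = nn.Conv2d(3, 1, 3, padding=1).cuda(rank)  # stand-in for the segmentation model
    ddp_model = DistributedDataParallel(model, device_ids=[rank])

    dataset = TensorDataset(torch.randn(8, 3, 64, 64))  # dummy data
    # DistributedSampler gives each process a disjoint shard of the dataset.
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=1, sampler=sampler)

    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    for (x,) in loader:
        out = ddp_model(x.cuda(rank))
        loss = out.mean()
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced across the two processes
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)  # 2 processes, one per GPU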


Thanks.
Yes, that would work for me. I'll try it and post here again if it doesn't work.

Okay, I tried wrapping my head around nn.DistributedDataParallel, following this link:
https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel

but I still can't figure out how to do it, since the tutorial just says to include the following lines for single-process multi-GPU:

import torch.distributed
from torch.nn.parallel import DistributedDataParallel

torch.distributed.init_process_group(backend="nccl")  # default "env://" init expects MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE in the environment
model = DistributedDataParallel(model)
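
For what it's worth, a minimal self-contained version of those two lines might look like the sketch below. The address, port, and stand-in model are placeholders of mine, and this is the single-process multi-GPU mode the linked docs describe (newer PyTorch releases replaced it with one process per GPU, as in the earlier sketch).

# Minimal sketch of the docs' single-process multi-GPU mode.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel

# One process, so rank 0 of a world of size 1; an explicit init_method
# avoids having to export MASTER_ADDR/MASTER_PORT.
dist.init_process_group(
    backend="nccl",
    init_method="tcp://127.0.0.1:23456",
    world_size=1,
    rank=0,
)

model = nn.Conv2d(3, 1, 3, padding=1).cuda()  # stand-in for the segmentation model
# With no device_ids, this mode replicates the module over all visible GPUs
# and scatters each input batch along dim 0, so a batch of 2 gives each
# GPU an effective batch size of 1.
ddp_model = DistributedDataParallel(model)

out = ddp_model(torch.randn(2, 3, 64, 64).cuda())
out.mean().backward()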