nn.DataParallel: mismatch between batch size of input and target

I used nn.DataParallel to increase the batch size for a large model, but at runtime I get an error saying: “Mismatch between the batch size of the input and target”.
I wrapped the model with `nn.DataParallel(model, device_list).cuda()`.

The batch size shown in the error message is 3 times my actual batch size (I am using 3 GPUs). From the examples I checked, my understanding is that nothing else needs to be done to replicate the model across multiple GPUs and split each batch among them. I would like to know whether something else is needed, or whether my understanding of the whole approach is plain wrong!
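
For context, here is a minimal sketch of what I mean (the model, batch size, and feature sizes are stand-ins for my actual code, and it assumes 3 GPUs are available):

```python
import torch
import torch.nn as nn

# Stand-in for my actual (much larger) model
model = nn.Linear(128, 10)

device_list = [0, 1, 2]  # 3 GPUs
model = nn.DataParallel(model, device_ids=device_list).cuda()

criterion = nn.CrossEntropyLoss()

inputs = torch.randn(96, 128).cuda()          # batch of 96, scattered as 3 chunks of 32
targets = torch.randint(0, 10, (96,)).cuda()  # one target per sample

outputs = model(inputs)             # outputs gathered back on the default device: (96, 10)
loss = criterion(outputs, targets)  # this is where I'd expect shapes to line up
```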

Thanks in advance

I got the same problem. Have you solved it?

Nope. Didn’t find anything.

I also have the same problem…

Make sure that the first dimension of every tensor you pass to the model is the batch dimension. nn.DataParallel splits its inputs along dim 0, so if dim 0 is something else (e.g. sequence length), each GPU receives the wrong slice of the data and the gathered output no longer matches your targets.
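
For example, a minimal sketch with a sequence model (the sizes here are made up, and it assumes 3 GPUs):

```python
import torch
import torch.nn as nn

# DataParallel scatters along dim 0, so dim 0 must be the batch dimension.
# For RNNs, that means constructing them with batch_first=True:
rnn = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
model = nn.DataParallel(rnn, device_ids=[0, 1, 2]).cuda()

x = torch.randn(96, 50, 16).cuda()  # (batch, seq_len, features) -- correct layout
out, _ = model(x)                   # out: (96, 50, 32), gathered across the 3 GPUs

# If x were (seq_len, batch, features) instead, DataParallel would split the
# *sequence* dimension across GPUs rather than the samples, so the shapes
# downstream (and at the loss) stop matching the targets -- which can show up
# as a batch-size mismatch like the one above.
```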