Question about multi-GPU training

Hi,
I have a 1080 Ti and am now wondering whether I should get a 2080 Ti.
If I use DataParallel to train a model in PyTorch with the 1080 Ti and the 2080 Ti together, will the 1080 Ti become a bottleneck in the training process?

Sorry, I am not familiar with how DataParallel works.
Let's say the whole dataset can be divided into 10 batches. Will each GPU be assigned a fixed number of batches (i.e., the 1080 Ti and the 2080 Ti each train on 5 batches)?

Or, if one GPU trains faster, will it take more batches?

Thanks

If you are using nn.DataParallel, each input batch is split evenly along the batch dimension, so each GPU gets the same chunk size (assuming the batch size is divisible by the number of GPUs).
If you look at the internals, you could adapt the scatter code and use a custom chunking approach.
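Here is a minimal sketch of both ideas (the device ids, tensor shapes, and the 6/4 split are just illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.cuda.comm as comm

# nn.DataParallel replicates the model and splits the input batch
# evenly along dim 0 across the visible GPUs.
model = nn.DataParallel(nn.Linear(10, 2), device_ids=[0, 1]).cuda()

x = torch.randn(10, 10, device="cuda")  # batch of 10 -> 5 samples per GPU
out = model(x)                          # replicas run in parallel
print(out.shape)                        # torch.Size([10, 2]), gathered on GPU 0

# For an uneven split, you could scatter manually and give the faster
# GPU a larger share; torch.cuda.comm.scatter accepts chunk_sizes:
chunks = comm.scatter(x, devices=[0, 1], chunk_sizes=[6, 4], dim=0)
print([c.shape for c in chunks])  # [torch.Size([6, 10]), torch.Size([4, 10])]
```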

However, I would generally advise against a system with mixed GPUs.
E.g. your 2080 Ti has TensorCores, which can speed up FP16 calculations, while your 1080 Ti won't benefit from them.
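For context, this is roughly what a mixed-precision training step looks like with torch.cuda.amp (a minimal sketch; the model and sizes are made up). On a 2080 Ti the autocast matmuls can run on TensorCores, while a 1080 Ti executes the same code without that speedup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 underflow

x = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

with torch.cuda.amp.autocast():       # ops inside run in FP16 where safe
    loss = F.mse_loss(model(x), target)

scaler.scale(loss).backward()
scaler.step(optimizer)                # unscales grads, then optimizer.step()
scaler.update()
```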

Thanks so much for your reply.
Just curious, from a performance point of view, would you replace the 1080 Ti with a 2080 Ti, or
train the model with dual 1080 Ti GPUs?