Using multiple GPUs of different models

Hi there

When I use multi-GPU processing (nn.DataParallel) with different GPUs, such as a GTX 1080 and a GTX 1080 Ti,
does the GTX 1080 Ti end up with the same effective computation power as the GTX 1080?

Hi,

DataParallel divides the batch equally across devices when possible; your batch size is probably not big enough to fully occupy the 1080 Ti. You can try increasing the batch size, but the 1080 Ti will still wait for the 1080 to finish its forward pass before the gradients are computed.
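
For context, here is a minimal sketch of how DataParallel splits a batch across two cards. The model and tensor sizes are just placeholders, not anything from your setup:

```python
import torch
import torch.nn as nn

# Placeholder toy model; any nn.Module works the same way.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# DataParallel scatters each batch equally across the listed devices,
# e.g. device 0 = GTX 1080, device 1 = GTX 1080 Ti.
model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

# With batch size 64, each GPU receives a chunk of 32 samples.
inputs = torch.randn(64, 512).cuda()
outputs = model(inputs)  # forward runs on both GPUs; outputs are gathered on device 0
```

Because the split is equal, the faster card idles at the gather step until the slower one finishes.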


DataParallel divides the computations equally among all the GPUs, so the effective speed and memory of each GPU in the set is capped by the slowest one.
This happens because the batch outputs are synchronized at the end: even if the 1080 Ti finishes earlier, it will wait for the 1080.
On the other hand, since the batch is distributed equally among the GPUs, the 1080 will run out of memory before the 1080 Ti does.
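
If it helps, one way to see the per-device memory budget, so you can size the batch for the smaller card, is to query the device properties. The 8 GB / 11 GB figures mentioned in the comment are the stock capacities of those two cards:

```python
import torch

# The batch is split equally, so the effective memory ceiling is the
# smaller card: pick a batch size that fits the GTX 1080 (8 GB),
# not the 1080 Ti (11 GB).
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"cuda:{i} {props.name}: {props.total_memory / 1024**3:.1f} GB")
```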


Thanks for your reply!