Data portion to send to each GPU when using DataParallel

I have 2 GPUs, one of which is roughly twice as fast (and has double the memory) of the other. I understand that the slower one will be a bottleneck when using DataParallel, but is there a way (and if not, maybe it would be a good feature) to specify how much of the data to send to each GPU? The idea is: if GPU 1 is twice as fast as GPU 2, send GPU 2 half as much data as GPU 1, so both finish their computation at about the same time and GPU 2 stops being the bottleneck.

There isn’t a way to do this with PyTorch’s default nn.DataParallel class, unfortunately.

You can try to do this yourself using the lower-level constructs, as the implementation is not that complicated: https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/data_parallel.py#L174-L189
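To illustrate, here is a minimal sketch of how one might combine those lower-level primitives (replicate, parallel_apply, gather) with a manual, uneven split of the batch. The function name `weighted_data_parallel` and its `weights` argument are hypothetical, not part of the PyTorch API, and it assumes the module already lives on `device_ids[0]` and that the batch is large enough for every device to get at least one sample:

```python
import torch
from torch import nn
from torch.nn.parallel import replicate, parallel_apply, gather

def weighted_data_parallel(module, inputs, device_ids, weights, output_device=None):
    """Split `inputs` along dim 0 in proportion to `weights` (one weight per GPU),
    run one replica of `module` per GPU, and gather the outputs.
    e.g. weights=[2, 1] if device_ids[0] is roughly twice as fast as device_ids[1]."""
    if output_device is None:
        output_device = device_ids[0]

    # Per-device chunk sizes proportional to the weights.
    batch_size = inputs.size(0)
    total = sum(weights)
    sizes = [batch_size * w // total for w in weights]
    sizes[0] += batch_size - sum(sizes)  # give any rounding remainder to the first device

    # Manual "scatter": split the batch unevenly and move each chunk to its GPU.
    chunks = torch.split(inputs, sizes, dim=0)
    scattered = [chunk.to(f"cuda:{d}") for chunk, d in zip(chunks, device_ids)]

    # Replicate the module onto every device and run the replicas in parallel.
    replicas = replicate(module, device_ids[: len(scattered)])
    outputs = parallel_apply(replicas, [(x,) for x in scattered])

    # Gather the per-device outputs back onto a single device.
    return gather(outputs, output_device, dim=0)

# Example usage (assumes two visible CUDA devices):
model = nn.Linear(128, 10).to("cuda:0")
x = torch.randn(96, 128)
out = weighted_data_parallel(model, x, device_ids=[0, 1], weights=[2, 1])
```

With `weights=[2, 1]` the first GPU receives two thirds of each batch and the second one third, which is the kind of 2:1 split you describe.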