Take multiple GPUs as a single one

Suppose that I have a big model that cannot fit into one GPU, so I have to split the model across different GPUs. I’m wondering whether there’s a way to make PyTorch take multiple GPUs as a single one, so that we don’t have to split the model manually.

I’ve heard of nn.DataParallel for using two or more GPUs.
Taking multiple GPUs and using them as one is a great question; I’m also looking for an answer to this.
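
For reference, a minimal sketch of how nn.DataParallel is typically used (layer sizes here are just placeholders). Note that it replicates the whole model on every GPU and splits the input batch across them, so it does not merge GPU memory into one and won’t help when the model itself doesn’t fit on a single device:

import torch
import torch.nn as nn

# Hypothetical small model, just for illustration
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

if torch.cuda.device_count() > 1:
    # Each visible GPU gets a full copy of the model;
    # the input batch is scattered across the copies.
    model = nn.DataParallel(model)

model = model.cuda()
output = model(torch.randn(64, 128).cuda())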

I’m wondering whether there’s a way to make PyTorch take multiple GPUs as a single one, so that we don’t have to split the model manually.

Currently, this is not available; we are working on adding a model partitioning feature.

Manually splitting the model shouldn’t be too hard with today’s PyTorch API; you just need to append .to(device) to certain layers and to the intermediate outputs. See this tutorial: https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html
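
Something along these lines (layer names and sizes are made up, assuming two GPUs cuda:0 and cuda:1 are available):

import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        # First part of the network lives on GPU 0
        self.part1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to('cuda:0')
        # Second part lives on GPU 1
        self.part2 = nn.Linear(4096, 10).to('cuda:1')

    def forward(self, x):
        x = self.part1(x.to('cuda:0'))
        # Move the intermediate activation to the second GPU before part2
        x = self.part2(x.to('cuda:1'))
        return x

model = TwoGPUModel()
out = model(torch.randn(8, 1024))
# Labels/loss must live on the same device as the output (cuda:1 here)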