How to use multiple GPUs

I’m going to try training on multiple GPUs on AWS EC2 for the first time.

Currently, I’m training on a single GPU on my local PC.

It’s confusing because there are several different approaches to multi-GPU training to choose from.

The simplest one seems to be the approach below.

According to https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html,
is the part below the only thing I need to change in code that uses a single GPU?

device = torch.device("cuda:0")
model.to(device)
model = nn.DataParallel(model)

But I also wonder whether I should change

device = torch.device("cuda:0")

into

device = torch.device("cuda:0,1,2,3")

if I use 4 GPUs.
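For reference, here is a minimal sketch of how I understand the tutorial’s `DataParallel` setup. As far as I can tell, the device string stays `"cuda:0"` (a device string like `"cuda:0,1,2,3"` is not valid), and `nn.DataParallel` handles replicating the model across the visible GPUs. The `nn.Linear` toy model and the CPU fallback are just for illustration:

```python
import torch
import torch.nn as nn

# Toy model just for illustration (assumption, not from the tutorial)
model = nn.Linear(10, 2)

# The device string stays "cuda:0" even with multiple GPUs;
# torch.device("cuda:0,1,2,3") is not a valid device string.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

if torch.cuda.device_count() > 1:
    # DataParallel replicates the model across all visible GPUs
    # (or a subset via device_ids=[0, 1, 2, 3]) and splits each
    # input batch among them along dim 0.
    model = nn.DataParallel(model)

model.to(device)

# A batch of 8 samples; DataParallel scatters it across the GPUs
# and gathers the outputs back onto device 0.
x = torch.randn(8, 10).to(device)
out = model(x)
print(out.shape)  # torch.Size([8, 2])
```

If I read the tutorial correctly, GPU selection is controlled by which devices are visible (e.g. `CUDA_VISIBLE_DEVICES=0,1,2,3`) or by the `device_ids` argument, not by the device string itself.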
