Use multiple GPUs in cross validation?

Hi everyone,

I am wondering how to set up multi-GPU cross-validation in PyTorch.
In the cross-validation part, my code (only the epoch loop) looks like this:

criterion = torch.nn.MSELoss()  # instantiate the loss once, outside the loop
model.to(device)                # move the model once, not every epoch

for epoch in range(num_epochs):
    train_total_loss = 0.0
    model.train()
    for batch_index, (x_batch, y_batch) in enumerate(train_loader):
        x_batch, y_batch = x_batch.to(device), y_batch.to(device)
        optimizer.zero_grad()
        out = model(x_batch)
        y_batch = y_batch.view(out.shape)
        loss = criterion(out, y_batch)  # MSELoss must be instantiated before it is called
        loss.backward()
        optimizer.step()
        train_total_loss += loss.item()
    train_total_loss = train_total_loss / len(train_loader)  # mean loss of the epoch

Should I wrap the model with torch.nn.DataParallel(model) before model.to(device) to use multiple GPUs? And do I need to change how the loss is calculated?

Thanks a lot for your help!

It is recommended to use torch.nn.parallel.DistributedDataParallel instead (see the pointers in Distributed Data Parallel — PyTorch master documentation), since DataParallel is not actively being worked on and will eventually be deprecated.
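For reference, here is a minimal single-node sketch of your epoch loop under DistributedDataParallel, with one process per GPU. MyModel, train_dataset, the batch size, and the optimizer settings are placeholders for your own, not anything prescribed by the docs:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def train(rank, world_size, num_epochs=10):
    # One process per GPU; on a single node the rank doubles as the GPU index.
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    device = torch.device(f"cuda:{rank}")
    model = MyModel().to(device)                # placeholder for your model class
    model = DDP(model, device_ids=[rank])

    # DistributedSampler gives each process a disjoint shard of the dataset.
    sampler = DistributedSampler(train_dataset, num_replicas=world_size, rank=rank)
    train_loader = DataLoader(train_dataset, batch_size=64, sampler=sampler)

    criterion = torch.nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    for epoch in range(num_epochs):
        sampler.set_epoch(epoch)                # reshuffle the shards each epoch
        train_total_loss = 0.0
        model.train()
        for x_batch, y_batch in train_loader:
            x_batch, y_batch = x_batch.to(device), y_batch.to(device)
            optimizer.zero_grad()
            out = model(x_batch)
            loss = criterion(out, y_batch.view(out.shape))
            loss.backward()                     # gradients are all-reduced across processes here
            optimizer.step()
            train_total_loss += loss.item()
        train_total_loss /= len(train_loader)   # per-process mean loss of the epoch

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(train, args=(world_size,), nprocs=world_size)

Note that each process computes train_total_loss over its own shard only; if you want the global epoch loss you can combine the per-process sums with dist.all_reduce before dividing.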

If you do want to use torch.nn.DataParallel (DataParallel — PyTorch master documentation), you can specify the device IDs of the GPUs you want to use; by default all visible GPUs are used. For example, for two GPUs you would write torch.nn.DataParallel(model, device_ids=[0, 1]) for cuda:0 and cuda:1. One caveat: the model's parameters must already be on the first device in device_ids (e.g. model.to('cuda:0')) before the wrapped model is run.
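As a rough sketch under those assumptions, where model, x_batch, y_batch, and criterion are the names from your loop above:

import torch

device = torch.device("cuda:0")
model.to(device)  # parameters must live on device_ids[0] before wrapping
model = torch.nn.DataParallel(model, device_ids=[0, 1])

# The epoch loop is unchanged: in forward(), DataParallel scatters each
# x_batch across cuda:0 and cuda:1, runs the replicas in parallel, and
# gathers the outputs back on cuda:0.
out = model(x_batch)
loss = criterion(out, y_batch.view(out.shape))

That also answers your second question: with DataParallel the outputs are gathered on the output device (device_ids[0] by default) before you compute the loss, so the loss calculation does not need to change.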