Hi everyone,
I am wondering how to set up multi-GPU cross-validation in PyTorch.
In the cross-validation part, my code (only the epoch loop) looks like this:
criterion = torch.nn.MSELoss()
model.to(device)
for epoch in range(num_epochs):
    train_total_loss = 0
    model.train()
    for batch_index, (x_batch, y_batch) in enumerate(train_loader):
        x_batch, y_batch = x_batch.to(device), y_batch.to(device)
        optimizer.zero_grad()
        out = model(x_batch)
        y_batch = y_batch.view(out.shape)
        loss = criterion(out, y_batch)
        loss.backward()
        optimizer.step()
        train_total_loss += loss.item()
    train_total_loss = train_total_loss / len(train_loader)  # average loss of the epoch
Should I wrap the model with torch.nn.DataParallel(model) before calling model.to(device) to use multiple GPUs? And do I need to change how the loss is calculated?
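Concretely, this is what I have in mind — a minimal sketch, assuming nn.DataParallel is the right wrapper and that the tiny nn.Linear model here stands in for my real one. DataParallel splits each batch along dim 0 across the visible GPUs and gathers the outputs back on the first device, so (if I understand correctly) the loss line itself would not need to change:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model, just for illustration
model = nn.Linear(10, 1)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Wrap with DataParallel only when more than one GPU is visible;
# the wrapper scatters each input batch along dim 0 and gathers
# the outputs on the default device.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model.to(device)

criterion = nn.MSELoss()

x = torch.randn(8, 10, device=device)
y = torch.randn(8, 1, device=device)

out = model(x)            # gathered output, same shape as single-GPU case
loss = criterion(out, y)  # loss computed on the gathered output
```

Is that the intended usage, or is there something extra needed for the backward pass?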
Thanks a lot for help!