DataParallel throws an error “AttributeError: 'DataParallel' object has no attribute 'loss'”

Hello. I’m running my model on multiple GPUs, but it throws the error “AttributeError: ‘DataParallel’ object has no attribute ‘loss’”. Does DataParallel only replicate the forward() function of the model? Is the loss() function not replicated? What should I do about this? Thanks a lot.

class MyModule1(nn.Module):
  def __init__(self):
    super(MyModule1, self).__init__()
    self.fc1 = nn.Linear(2, 1)

  def forward(self, x):
    x = self.fc1(x)
    return x

  def loss(self, x):
    loss = ...  # loss computation elided in the original post
    return loss

model1 = MyModule1()
model1 = DataParallel(model1).to('cuda:0')
x = torch.tensor([10, 5], dtype=torch.float).to('cuda:0')
print(model1.loss(x))

That happens because model1 is now an object of class DataParallel, which indeed does not have such a function or attribute.
You should call model1.module.loss(x) instead.
But then the loss computation will run on only one GPU.
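A minimal, CPU-runnable sketch of the situation (the body of loss is a placeholder, since the original post elides it): attribute lookups on the DataParallel wrapper fail, while the wrapped module is still reachable via .module.

```python
import torch
import torch.nn as nn

class MyModule1(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 1)

    def forward(self, x):
        return self.fc1(x)

    def loss(self, x):
        # Placeholder loss: mean of the forward output
        return self.forward(x).mean()

model1 = nn.DataParallel(MyModule1())
x = torch.tensor([[10.0, 5.0]])

# model1.loss(x)  # AttributeError: 'DataParallel' object has no attribute 'loss'
out = model1.module.loss(x)  # works, but bypasses DataParallel entirely
```

Since model1.module.loss calls the unwrapped module directly, none of the scatter/gather machinery of DataParallel is involved, which is why it only uses one device.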

Alternatively, if possible, you could call self.loss inside your forward.
(Not sure if that fits your use case, @jiang_ix)


So what should I do if I want to use multiple GPUs?

So we cannot apply data parallelism to a customized method. Could someone confirm this? If so, I will stop searching in this direction.

Correct. You would have to call your custom method inside forward to make it work with nn.DataParallel.
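A sketch of that pattern, assuming a simple MSE loss as a stand-in for the elided one: forward takes the target as an extra argument and returns both the output and the loss, so DataParallel splits the batch and replicates the whole computation, including the loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModule2(nn.Module):
    """Computes the loss inside forward() so nn.DataParallel replicates it.
    The MSE loss here is a hypothetical example; substitute your own."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 1)

    def forward(self, x, target):
        out = self.fc1(x)
        loss = F.mse_loss(out, target)  # example loss, computed per replica
        return out, loss

model = nn.DataParallel(MyModule2())
x = torch.randn(4, 2)        # the batch dim is split across available GPUs
target = torch.randn(4, 1)   # scattered alongside x

out, loss = model(x, target)
# With multiple GPUs the per-replica scalar losses are gathered into a
# vector, so reduce before calling backward; mean() is a no-op on a scalar.
loss.mean().backward()
```

Returning the loss from forward is the usual workaround on this forum: anything computed outside forward sees only the full, un-scattered batch on one device.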