How to implement multiple loss functions at different layers

Hello, I’m new to PyTorch/ML. I’m trying to port CenterLoss to PyTorch; the network architecture is roughly like this:

convs ... - fc1 - fc2 - softmax_loss
             |
             |  - custom_loss(center_loss)

My question is: how can I implement multiple loss functions at different layers in PyTorch?

Thanks.

You can simply write it like this:

def forward(self, x, y1, y2):
    x1 = self.fc1(x)              # features from fc1, used by the center loss
    x2 = self.fc2(x1)             # logits from fc2, used by the softmax loss
    l1 = center_loss(x1, y1)
    l2 = softmax_loss(x2, y2)
    return l1, l2

# later

torch.autograd.backward([l1, l2])
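
For reference, here is a fuller, self-contained sketch of that pattern. The layer sizes, the TwoLossNet name, and the center_loss stub below are made-up placeholders for illustration, not a faithful port of the paper’s CenterLoss:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a real CenterLoss: pull each feature toward a fixed per-class
# center. Only meant to show where such a loss plugs in.
centers = torch.randn(10, 64)
def center_loss(feats, labels):
    return ((feats - centers[labels]) ** 2).sum(dim=1).mean()

class TwoLossNet(nn.Module):
    def __init__(self, in_dim=128, feat_dim=64, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, feat_dim)
        self.fc2 = nn.Linear(feat_dim, num_classes)

    def forward(self, x, y1, y2):
        x1 = self.fc1(x)                 # features for the center loss
        x2 = self.fc2(x1)                # logits for the softmax loss
        l1 = center_loss(x1, y1)
        l2 = F.cross_entropy(x2, y2)     # cross_entropy = log-softmax + NLL
        return l1, l2

model = TwoLossNet()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))

l1, l2 = model(x, y, y)                  # same labels used for both losses here
opt.zero_grad()
torch.autograd.backward([l1, l2])        # backprop both losses in one pass
opt.step()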

torch.autograd.backward([l1, l2])
doesn’t work for me. It says
TypeError: backward() takes at least 2 arguments (1 given)

How can I write it correctly?
Thanks!

I think you have to write:

torch.autograd.backward([l1, l2], [torch.ones(1), torch.ones(1)])
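That error comes from older PyTorch releases, where torch.autograd.backward required the gradient arguments to be passed explicitly. On current versions they default to None and can be omitted for scalar losses, so all of the following should behave the same (toy scalar losses, just to show the calls):

import torch

w = torch.randn(3, requires_grad=True)
l1 = (w ** 2).sum()        # stand-in for center_loss
l2 = w.abs().sum()         # stand-in for softmax_loss

torch.autograd.backward([l1, l2])                                      # fine for scalar losses
# torch.autograd.backward([l1, l2], [torch.ones(()), torch.ones(())])  # explicit grads
# (l1 + l2).backward()                                                 # summing first also works

print(w.grad)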

Another solution is to return the outputs of both the fc1 and fc2 layers:

def forward(self, x):
    ...
    x1 = self.fc1(x)
    x2 = self.fc2(x1)
    return x2, x1        # fc2 output (logits) and fc1 output (features)
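With that variant the losses are computed outside the model. Reusing the placeholder layer sizes and the center_loss stub from the sketch earlier in the thread (along with its x, y, nn, and F imports), the training step would look roughly like this:

class TwoHeadNet(nn.Module):
    def __init__(self, in_dim=128, feat_dim=64, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, feat_dim)
        self.fc2 = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.fc1(x)          # fc1 output
        logits = self.fc2(feats)     # fc2 output
        return logits, feats

model = TwoHeadNet()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

logits, feats = model(x)
loss = F.cross_entropy(logits, y) + center_loss(feats, y)
opt.zero_grad()
loss.backward()
opt.step()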

Does torch.autograd.backward([l1, l2]) mean the two losses backpropagate from separate nodes? For example, does softmax_loss update fc2 and the layers before fc2, while center_loss updates fc1 and the layers before fc1?
And if I use L = l1 + l2 and call L.backward(),
will L update fc2 and fc1 separately?

@wang5566 torch.autograd.backward([l1, l2]) and L = l1 + l2; L.backward() are the same: both accumulate identical gradients into fc1, fc2, and the layers before them.

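A quick way to convince yourself of that equivalence is to compute the gradients both ways on a toy model and compare (stand-in losses, not the actual center/softmax losses):

import torch
import torch.nn as nn

fc1 = nn.Linear(4, 3)
fc2 = nn.Linear(3, 2)
params = list(fc1.parameters()) + list(fc2.parameters())
x = torch.randn(5, 4)

def losses():
    x1 = fc1(x)
    x2 = fc2(x1)
    return x1.pow(2).mean(), x2.pow(2).mean()   # stand-ins for the two losses

# Way 1: backprop both losses in one call.
l1, l2 = losses()
torch.autograd.backward([l1, l2])
grads_a = [p.grad.clone() for p in params]

# Way 2: sum first, then backprop.
for p in params:
    p.grad = None
l1, l2 = losses()
(l1 + l2).backward()
grads_b = [p.grad.clone() for p in params]

print(all(torch.allclose(a, b) for a, b in zip(grads_a, grads_b)))   # prints True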