lijin_liu
(Lijin Liu)
April 28, 2017, 10:59am
1
Hello, I’m new to PyTorch/ML. I’m trying to port CenterLoss to torch; the network architecture is here, roughly like:
convs ... - fc1 - fc2 - softmax_loss
             |
             '-- custom_loss(center_loss)
My question is: how can I implement multiple loss functions at different layers in PyTorch?
Thanks.
smth
April 29, 2017, 2:47pm
2
You simply write it like this:
def forward(self, x, y1, y2):
    x1 = self.fc1(x)
    x2 = self.fc2(x1)
    l1 = center_loss(x1, y1)   # loss on the fc1 features
    l2 = softmax_loss(x2, y2)  # loss on the fc2 logits
    return l1, l2

# later
torch.autograd.backward([l1, l2])
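For reference, a minimal self-contained sketch of this pattern. The layer sizes, CrossEntropyLoss standing in for the softmax loss, and the squared-distance stand-in for the real center loss are all illustrative assumptions, not the original network:

import torch
import torch.nn as nn

class TwoLossNet(nn.Module):
    def __init__(self, in_dim=128, feat_dim=2, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, feat_dim)       # features fed to the center loss
        self.fc2 = nn.Linear(feat_dim, num_classes)  # logits fed to the softmax loss
        self.softmax_loss = nn.CrossEntropyLoss()
        # learnable per-class centers (a crude stand-in for a real center loss)
        self.centers = nn.Parameter(torch.zeros(num_classes, feat_dim))

    def forward(self, x, y1, y2):
        x1 = self.fc1(x)
        x2 = self.fc2(x1)
        l1 = (x1 - self.centers[y1]).pow(2).sum(dim=1).mean()  # center-style loss on fc1
        l2 = self.softmax_loss(x2, y2)                         # softmax loss on fc2
        return l1, l2

net = TwoLossNet()
x = torch.randn(4, 128)
y = torch.randint(0, 10, (4,))
l1, l2 = net(x, y, y)
torch.autograd.backward([l1, l2])  # grads from both losses accumulate in shared layers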
4 Likes
Paralysis
(Paralysis)
June 22, 2017, 9:17pm
3
torch.autograd.backward([l1, l2])
doesn’t work for me. It says
TypeError: backward() takes at least 2 arguments (1 given)
How can I write it correctly?
Thanks!
smth
June 22, 2017, 9:48pm
4
I think you have to write:
torch.autograd.backward([l1, l2], [torch.ones(1), torch.ones(1)])
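(That grad_tensors argument matches the old Variable-era API that produced the TypeError above. In later PyTorch releases, scalar losses get an implicit gradient of 1, so either of these works:)

torch.autograd.backward([l1, l2])  # scalars no longer need explicit grad tensors
# or simply:
(l1 + l2).backward()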
2 Likes
jxgu1016
(JxGu)
August 1, 2017, 12:27pm
5
Another solution is to output the results of both fc1 and fc2 layers:
def forward(self, x):
    ...
    x1 = self.fc1(x)
    return self.fc2(x1), x1
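With that signature, the losses are computed outside the model, e.g. (a sketch reusing the loss names from the earlier posts):

logits, feats = model(x)
loss = softmax_loss(logits, y) + center_loss(feats, y)
loss.backward()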
1 Like
wang5566
(Wang5566)
January 12, 2018, 8:27am
6
Does torch.autograd.backward([l1, l2]) mean the two losses backpropagate from separate nodes? E.g., softmax_loss updates fc2 and the layers before fc2, while custom_loss updates fc1 and the layers before fc1?
And if I use L = l1 + l2 and call L.backward(), will L update fc2 and fc1 separately?
smth
January 12, 2018, 12:09pm
7
@wang5566 torch.autograd.backward([l1, l2]) and L = l1 + l2; L.backward() are the same.
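Both forms accumulate dl1/dw + dl2/dw into each shared parameter’s .grad. A quick check with toy tensors (rebuilding the graph for each call, since backward frees it):

import torch

w = torch.randn(3, requires_grad=True)

l1, l2 = (w ** 2).sum(), (3 * w).sum()
torch.autograd.backward([l1, l2])
g_joint = w.grad.clone()

w.grad = None
l1, l2 = (w ** 2).sum(), (3 * w).sum()
(l1 + l2).backward()

assert torch.allclose(g_joint, w.grad)  # identical gradients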
1 Like