acgtyrant #1
For example, how do I define a suitable "zero loss" so that it has no effect on the net's backward/step when I do not have any box to regress?
...
if positive_count > 0:
    # keep only the anchors with a non-background label
    positive_indexes = (labels != 0).nonzero()[:, 0]
    boxes = boxes[positive_indexes]
    box_target = box_target[positive_indexes]
    box_loss = F.smooth_l1_loss(boxes, box_target)
else:
    box_loss = ???
box_loss.backward()
net.step()
...
SimonW (Simon Wang) #2
How about
...
if positive_count > 0:
    positive_indexes = (labels != 0).nonzero()[:, 0]
    boxes = boxes[positive_indexes]
    box_target = box_target[positive_indexes]
    box_loss = F.smooth_l1_loss(boxes, box_target)
    box_loss.backward()
    net.step()
...
? If you are not in a function scope and are seeing memory issues, you might need to del things to free up the graph.
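A hedged reading of that memory point: if backward() is skipped, any tensors still referencing the autograd graph keep it alive, so deleting those references lets it be garbage-collected. A minimal sketch following the thread's placeholder names (net, boxes, box_target, labels), not a definitive pattern:

if positive_count > 0:
    positive_indexes = (labels != 0).nonzero()[:, 0]
    box_loss = F.smooth_l1_loss(boxes[positive_indexes],
                                box_target[positive_indexes])
    box_loss.backward()
    net.step()
else:
    # no positives: skip backward/step, but drop the tensors that still
    # reference the autograd graph so it can be freed
    del boxes, box_target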
acgtyrant #3
I do not like this… After all, I want to add it to another loss.
SimonW (Simon Wang) #4
Your code didn’t say so. Then just use 0?
acgtyrant #5
OK, box_loss = Variable(torch.Tensor([0]).type_as(class_loss.data))
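As a side note for anyone reading this on PyTorch 0.4 or later, where Variable is merged into Tensor, a roughly equivalent zero loss would be:

# rough modern equivalent; type_as keeps dtype/device matched to class_loss
box_loss = torch.zeros(1).type_as(class_loss)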
SimonW (Simon Wang) #6
If you want to add to another loss, you can just use plain python numeric 0.
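That works because adding a plain Python number to a Variable yields a Variable, so backward() still runs. A minimal self-contained sketch using the legacy Variable API from this thread:

import torch
from torch.autograd import Variable

class_loss = Variable(torch.randn(1), requires_grad=True)
box_loss = 0                         # plain Python numeric zero
total_loss = class_loss + box_loss   # Variable + int is still a Variable
total_loss.backward()                # gradients flow through class_loss only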
acgtyrant #7
There is some code elsewhere:
print(class_loss.data[0])
print(box_loss.data[0])
So to keep the code consistent, I have to wrap the 0 in a Variable.
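In other words, the Variable wrapper keeps both print lines valid in the no-positives branch, while a plain 0 has no .data attribute. A quick check under the same legacy API:

import torch
from torch.autograd import Variable

class_loss = Variable(torch.randn(1))
box_loss = Variable(torch.Tensor([0]).type_as(class_loss.data))
print(class_loss.data[0])  # works
print(box_loss.data[0])    # works too; a plain 0 would fail on .data here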