How to define a suitable "zero loss"?

For example, how do I define a suitable “zero loss” so that it has no effect on the net’s backward/step when I don’t have any boxes to regress?

...
if positive_count > 0:
    # keep only the positive (non-background) samples
    positive_indexes = (labels != 0).nonzero()[:, 0]
    boxes = boxes[positive_indexes]
    box_target = box_target[positive_indexes]
    box_loss = F.smooth_l1_loss(boxes, box_target)
else:
    box_loss = ???  # what should the "zero loss" be here?
box_loss.backward()
net.step()
...

How about

...
if positive_count > 0:
    positive_indexes = (labels != 0).nonzero()[:, 0]
    boxes = boxes[positive_indexes]
    box_target = box_target[positive_indexes]
    box_loss = F.smooth_l1_loss(boxes, box_target)
    box_loss.backward()
    net.step()
...

? If you are not in a function scope and are seeing memory issues, you might need to del things to free up the graph. 🙂

I do not like this… After all, I want to add it to another loss.

Your code didn’t show that. In that case, just use 0?


OK, box_loss = Variable(torch.Tensor([0]).type_as(class_loss.data)).

If you want to add it to another loss, you can just use a plain Python numeric 0.
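
Something like this, as a minimal sketch. I'm assuming the tensors from your snippet; total_loss and optimizer are placeholder names for whatever your net.step() wraps, not from your code:

import torch.nn.functional as F

if positive_count > 0:
    positive_indexes = (labels != 0).nonzero()[:, 0]
    box_loss = F.smooth_l1_loss(boxes[positive_indexes],
                                box_target[positive_indexes])
else:
    box_loss = 0  # plain Python zero: class_loss + 0 leaves the graph untouched

total_loss = class_loss + box_loss  # no extra gradient path in the no-box case
total_loss.backward()
optimizer.step()  # `optimizer` stands in for whatever net.step() wraps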


But there is also code like this elsewhere:

print(class_loss.data[0])
print(box_loss.data[0])

So to keep the code consistent, I have to wrap 0 in a Variable.
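
So I end up with something like this (a sketch, assuming class_loss from the snippet above; note that in newer PyTorch versions Variable and .data[0] are deprecated, and you would use a 0-dim tensor and .item() instead):

import torch
from torch.autograd import Variable

# Zero loss with the same dtype/device as class_loss, so that
# box_loss.data[0] prints correctly in both branches:
box_loss = Variable(torch.zeros(1).type_as(class_loss.data))

# Equivalent in modern PyTorch (Variable is deprecated):
# box_loss = torch.zeros((), dtype=class_loss.dtype, device=class_loss.device)
# print(box_loss.item())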