Generalized_box_iou_loss diverges to negative values

Hello, PyTorch community,

I’m currently working on an object detection task and I’m interested in using the Generalized Intersection over Union (GIoU) loss instead of the usual MSELoss. While reading about torchvision's generalized_box_iou_loss function, I noticed that it expects bounding boxes in (x1, y1, x2, y2) format satisfying 0 <= x1 < x2 (and likewise 0 <= y1 < y2). I have a couple of questions regarding this:

  1. My bounding box regression values are normalized with respect to the image width and height. Should I directly input these normalized values into the GIoU loss function, or is it necessary to denormalize them before use?

  2. During the initial training phase, the model’s raw outputs can be quite unpredictable, so it is hard to guarantee that the condition 0 <= x1 < x2 is always satisfied. When predictions violate it, I observe the loss going negative, and the optimizer then drives it towards negative infinity instead of learning anything useful.
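One workaround I have been experimenting with is reparameterizing the raw network outputs so that valid boxes hold by construction. The `decode_boxes` helper below is just my own sketch, assuming the head predicts (cx, cy, w, h) that I squash with a sigmoid; it is not a torchvision function:

```python
import torch

def decode_boxes(raw: torch.Tensor) -> torch.Tensor:
    """Map raw network outputs to valid normalized (x1, y1, x2, y2) boxes.

    Interprets the last dim of `raw` as (cx, cy, w, h), squashes it into
    (0, 1) with a sigmoid, then converts to corner format and clamps to the
    image, so 0 <= x1 < x2 <= 1 and 0 <= y1 < y2 <= 1 hold for any input.
    (Hypothetical reparameterization, not part of torchvision.)
    """
    cx, cy, w, h = torch.sigmoid(raw).unbind(dim=-1)
    x1 = (cx - w / 2).clamp(min=0.0)
    y1 = (cy - h / 2).clamp(min=0.0)
    x2 = (cx + w / 2).clamp(max=1.0)
    y2 = (cy + h / 2).clamp(max=1.0)
    return torch.stack((x1, y1, x2, y2), dim=-1)
```

With this decoding the condition 0 <= x1 < x2 holds for any raw output, but I am not sure whether this is the right fix or whether it hides a problem the loss should be seeing.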

I’m seeking guidance on how to effectively train using the GIoU loss function under these conditions. Any insights or recommendations would be greatly appreciated.

Thank you in advance for your assistance!