How to deal with N=0 when using `smooth_l1_loss`?

I use a mask to filter the input and target. Sometimes the mask is empty, so the input and target end up empty too.

When that happens, `nn.functional.smooth_l1_loss` returns NaN. My current fix is to return a zero instead. Is there a better way to handle it?
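
For reference, here is a minimal repro of the NaN (it comes from the default `'mean'` reduction computing 0/0 over zero elements, while `'sum'` over an empty tensor is just 0):

    import torch
    import torch.nn.functional as F

    empty = torch.empty(0)
    print(F.smooth_l1_loss(empty, empty))                   # tensor(nan)
    print(F.smooth_l1_loss(empty, empty, reduction='sum'))  # tensor(0.)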

Here's my regression loss:

    def regr_loss(self, regr, gt_regr, mask):
        mask = mask.gt(0)
        num = mask.float().sum()
        if num == 0:
            # A float zero on the same device as the inputs; Variable is
            # deprecated since PyTorch 0.4, and torch.tensor(0) would be an int.
            return torch.tensor(0.0, device=regr.device)
        mask = mask.unsqueeze(1).expand_as(gt_regr)
        regr = regr[mask]
        gt_regr = gt_regr[mask]

        # Use 'sum' since we normalize by the mask count ourselves below;
        # the default 'mean' already divides by the element count, so the
        # extra division would double-normalize.
        regr_loss = nn.functional.smooth_l1_loss(regr, gt_regr, reduction='sum')
        regr_loss = regr_loss / (num + 1e-4)
        return regr_loss
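
One branch-free alternative that seems to avoid the special case entirely: since `reduction='sum'` yields 0 rather than NaN on empty tensors, you can always take the sum and divide by a clamped count. A minimal sketch (written as a free function with the same shapes assumed as above, not tested against the full pipeline):

    import torch
    import torch.nn.functional as F

    def regr_loss(regr, gt_regr, mask):
        mask = mask.gt(0)
        num = mask.float().sum()
        mask = mask.unsqueeze(1).expand_as(gt_regr)
        # 'sum' reduction gives 0.0 on empty tensors instead of NaN,
        # so an all-zero mask needs no special case.
        loss = F.smooth_l1_loss(regr[mask], gt_regr[mask], reduction='sum')
        # Clamp the count so the division is safe when num == 0.
        return loss / num.clamp(min=1.0)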