I am trying to backpropagate an error value that is computed externally by a heuristic, using the custom loss function shown below.
The problem is that when lb (lambda) is zero, all the gradients are zero after calling loss.backward(). When lambda is greater than zero (e.g. 0.001), the weight gradients are non-zero.

Any ideas?
```python
import torch


class FCLayerCustomLoss(torch.nn.Module):
    def __init__(self, model, lb=0.001):
        super(FCLayerCustomLoss, self).__init__()
        self.model = model
        self.lb = lb

    def forward(self, score):
        # Flatten all parameters of the dense0 layer into a single vector
        dense0_params = torch.cat(tuple([x.view(-1) for x in self.model.dense0.parameters()]))
        # L2 penalty (p=2 norm) on the dense0 weights, scaled by lambda
        l0_regularization = self.lb * torch.norm(dense0_params, p=2)
        return score + l0_regularization
```
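For reference, here is a minimal sketch of how I use the loss. The model and the heuristic score are placeholders for illustration only (assuming a toy model with a dense0 linear layer and a constant standing in for the externally computed score):

```python
import torch


# Hypothetical toy model with a dense0 layer, for illustration only
class ToyModel(torch.nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.dense0 = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.dense0(x)


model = ToyModel()
criterion = FCLayerCustomLoss(model, lb=0.0)  # lb=0.0 reproduces the issue

# Placeholder for the score computed externally by the heuristic
score = torch.tensor(0.5)

loss = criterion(score)
loss.backward()

# With lb=0.0 the weight gradients come back as all zeros;
# with lb > 0 (e.g. 0.001) they are non-zero
print(model.dense0.weight.grad)
```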