Custom loss function dependent on the samples

I found the solution here. Because of the error message, I had assumed the loss was being computed differently. It turns out it is easy to do by simply changing the loss function to:

import torch

def custom_loss(delta):
    # delta holds one weight per sample
    def loss(y_pred, y_true):
        # clamp predictions away from 0 and 1 so log() stays finite
        y_pred_clamped = torch.clamp(y_pred, 1e-8, 1 - 1e-8)
        log_likelihood = y_true * torch.log(y_pred_clamped)
        # weight each sample's negative log-likelihood by its delta, then sum
        return torch.sum(-log_likelihood * delta.unsqueeze(-1).expand_as(log_likelihood))
    return loss
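For context, here is a quick usage sketch (the shapes, class count, and weight values are made up for illustration, not from the original thread): `delta` carries one weight per sample, and the closure broadcasts it across the class dimension.

```python
import torch
import torch.nn.functional as F

def custom_loss(delta):
    def loss(y_pred, y_true):
        y_pred_clamped = torch.clamp(y_pred, 1e-8, 1 - 1e-8)
        log_likelihood = y_true * torch.log(y_pred_clamped)
        return torch.sum(-log_likelihood * delta.unsqueeze(-1).expand_as(log_likelihood))
    return loss

# hypothetical batch: 4 samples, 3 classes
y_pred = torch.softmax(torch.randn(4, 3), dim=-1)           # predicted probabilities
y_true = F.one_hot(torch.tensor([0, 2, 1, 0]), 3).float()   # one-hot targets
delta = torch.tensor([1.0, 0.5, 2.0, 1.0])                  # per-sample weights

loss_fn = custom_loss(delta)
print(loss_fn(y_pred, y_true))  # a scalar tensor
```

Since `delta` is captured by the closure, the returned `loss` still matches the usual two-argument signature expected by a training loop.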

Thanks alban for your time and for making me realize my mistake 🙂