PyTorch custom criterion depending on target

I'm doing a research project where I want to create a custom loss function that depends on the targets. That is, I want to penalize with BCEWithLogitsLoss scaled by a hyperparameter lambda, and only apply this lambda if the model is not correctly detecting a class.

In more detail, I have a pretrained model that I want to retrain with some of the layers frozen. The model detects faces in images with some probability. I want to penalize certain kinds of images with a factor lambda if they are incorrectly classified (suppose the images that need that penalization have a special character in the filename, or something similar).

From the source code of PyTorch:

from typing import Optional

import torch.nn.functional as F
import torch.nn.modules.loss as l
from torch import Tensor

class CustomBCEWithLogitsLoss(l._Loss):
    def __init__(self, weight: Optional[Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean',
                 pos_weight: Optional[Tensor] = None) -> None:
        super(CustomBCEWithLogitsLoss, self).__init__(size_average, reduce, reduction)
        # weight and pos_weight are registered as buffers so they move
        # with the module (e.g. via .to(device)) without being parameters
        self.register_buffer('weight', weight)
        self.register_buffer('pos_weight', pos_weight)
        self.weight: Optional[Tensor]
        self.pos_weight: Optional[Tensor]

    def forward(self, input: Tensor, target: Tensor) -> Tensor:
        return F.binary_cross_entropy_with_logits(input, target,
                                                  self.weight,
                                                  pos_weight=self.pos_weight,
                                                  reduction=self.reduction)

Here, forward takes two tensors as inputs, so I don't know how to pass in the class of the images that I want to penalize with lambda. Adding lambda to the constructor is fine, but how do I do the forward pass if it only accepts tensors?

To clarify the question: suppose I have a training/testing folder with the images. The files with the character @ in the filename are the ones whose correct classification I want to weight much more heavily than the files without the character, by a factor lambda.

How can I tell, in the regular fashion of training a model in PyTorch, that those files have to use a lambda penalization (let's say the loss function is lambda * BCEWithLogitsLoss) but the other ones don't? I'm using a DataLoader.
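What I want, written as a naive loop (everything here is made up; in reality the is_special flags would have to come from checking the filenames for @, which is exactly the part I don't know how to wire through the DataLoader):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
lam = 2.0  # made-up value for the penalization factor

logits = torch.randn(4, 1)                # model outputs for a batch of 4
targets = torch.randint(0, 2, (4, 1)).float()
is_special = [True, False, False, True]   # '@' in the filename or not

loss = 0.0
for i, special in enumerate(is_special):
    sample_loss = criterion(logits[i], targets[i])
    loss = loss + (lam * sample_loss if special else sample_loss)
loss = loss / len(is_special)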

You could most likely scale the unreduced loss and apply the reduction afterwards.
To avoid the default "mean" reduction, use nn.BCEWithLogitsLoss(reduction="none"), which will output the loss value for each sample. Scale the losses with your lambda weights afterwards, and finally reduce the loss. You might also want to normalize the result by the sum of the used weights to avoid creating different loss ranges that would depend on the current class distribution.

I see. Could you post a small snippet?
Also, how would you access each sample in the batch with a data loader?

Here is an example using nn.CrossEntropyLoss showing how the loss weighting is applied to an unreduced loss and normalized afterwards.
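Something like this; the weight values are made up (your lambda would be folded into them):

import torch
import torch.nn as nn

batch_size, num_classes = 8, 4
output = torch.randn(batch_size, num_classes, requires_grad=True)
target = torch.randint(0, num_classes, (batch_size,))

# made-up per-sample weights, e.g. lambda=2.0 for the "special" samples
weight = torch.tensor([2.0, 1.0, 1.0, 2.0, 1.0, 1.0, 1.0, 2.0])

criterion = nn.CrossEntropyLoss(reduction='none')  # keep per-sample losses
loss = criterion(output, target)                   # shape: [batch_size]

# scale each sample, then normalize by the weight sum so the loss range
# doesn't depend on how many weighted samples end up in the batch
loss = (loss * weight).sum() / weight.sum()
loss.backward()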

The DataLoader returns a batch containing samples in dim0 (the batch dimension), so you can directly access them.
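For your use case you could derive the weight from the filename inside the Dataset and return it as a third value; the default collate_fn will stack the weights into a [batch_size] tensor. A rough sketch (everything here is a stand-in: random tensors instead of real images, a placeholder model, made-up filenames):

import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class WeightedFaceDataset(Dataset):
    # hypothetical dataset: random tensors stand in for images; the
    # per-sample weight is derived from the (made-up) filename
    def __init__(self, filenames, targets, lam=2.0):
        self.filenames = filenames
        self.targets = targets
        self.lam = lam

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, idx):
        image = torch.randn(3, 64, 64)  # load the real image here instead
        target = torch.tensor(self.targets[idx], dtype=torch.float32)
        # lambda for files with '@' in the name, 1.0 otherwise
        weight = self.lam if '@' in self.filenames[idx] else 1.0
        return image, target, torch.tensor(weight)

filenames = ['a.png', 'b@.png', 'c.png', 'd@.png']
targets = [0.0, 1.0, 1.0, 0.0]
loader = DataLoader(WeightedFaceDataset(filenames, targets), batch_size=2)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))  # stand-in model
criterion = nn.BCEWithLogitsLoss(reduction='none')

for images, target_batch, weights in loader:
    output = model(images).squeeze(1)        # one logit per image
    loss = criterion(output, target_batch)   # per-sample loss, shape [batch_size]
    loss = (loss * weights).sum() / weights.sum()
    loss.backward()
    # optimizer.step(), optimizer.zero_grad(), etc. would go here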