Median-based loss instead of mean

Is there a way to use the median as the reduction operation when computing the loss? I have this piece of code:

    output = model(data)
    loss = F.cross_entropy(output, target)
    loss.backward()

I would like to take the median of losses in a batch, instead of the mean, and get gradients based on that.

In that case, you can write a custom function along the lines of this pseudocode:

    loss = elementwise_cross_entropy(output, target)
    median_val_index = index of the median value in loss
    return loss[median_val_index]

Is there more detail on how to create a new loss function using this approach? I cannot find `elementwise_cross_entropy` in `functional.py`.

You can pass `reduction="none"` to the loss so it returns the per-sample losses, then take the median of the result:

    y_hat = torch.randn(3, 5, requires_grad=True)
    y = torch.empty(3, dtype=torch.long).random_(5)

    print(f"Outputs: \n{y_hat}\n")
    print(f"Targets: \n{y}\n")

    loss = torch.nn.CrossEntropyLoss()
    loss_none = torch.nn.CrossEntropyLoss(reduction="none")

    l_mean = loss(y_hat, y)
    l_none = loss_none(y_hat, y)

    print(f"Loss with 'mean' reduction: \n{l_mean}\n")
    print(f"Loss with 'none' reduction: \n{l_none}\n")
    print(f"Median loss with 'none' reduction: \n{torch.median(l_none)}")

Output:

    Outputs: 
    tensor([[ 2.0904,  0.2319,  0.0346,  0.5896,  1.3256],
            [ 0.4279,  0.6223, -0.1241,  1.8375,  0.7244],
            [-0.4159,  0.0537, -0.2241,  0.4073,  0.6651]], requires_grad=True)

    Targets: 
    tensor([1, 3, 1])

    Loss with 'mean' reduction: 
    1.6558622121810913

    Loss with 'none' reduction: 
    tensor([2.5377, 0.6982, 1.7317], grad_fn=<NllLossBackward0>)

    Median loss with 'none' reduction: 
    1.7316656112670898
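One thing worth knowing before training with this: because the median selects a single element (for an odd batch size), `backward()` routes gradient only to the one sample whose loss is the median. Here's a small sketch checking that (assuming only `torch` is installed; the variable names are mine, not from the original post):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
y_hat = torch.randn(3, 5, requires_grad=True)
y = torch.randint(5, (3,))  # random class targets

# per-sample losses, then backpropagate through the median
losses = F.cross_entropy(y_hat, y, reduction="none")
losses.median().backward()

# only the row of the sample that supplied the median value gets gradient
rows_with_grad = y_hat.grad.abs().sum(dim=1) > 0
print(rows_with_grad)
```

So a median "reduction" effectively trains on one sample per batch, which is part of why it is robust to outliers but can be noisy.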

Hope this helps :smile:
