Hi,

I am currently working on a segmentation problem, where my target is a segmentation mask with only 2 different classes (0 for background, 1 for object).

Until now I was using `NLLLoss2d`, which works just fine, but I would like to add an additional pixelwise weighting to the object’s borders. I thought about creating a weight mask for each individual target, which will be calculated on the fly.
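One way to build such a border weight mask on the fly could look like the sketch below (not from the original post; the function name `border_weights` and the weight value `2.0` are my own assumptions). It marks every pixel whose 4-neighborhood crosses the class boundary and assigns it a higher weight:

```python
import numpy as np

def border_weights(mask, border_weight=2.0):
    # mask: 2D array with 0 = background, 1 = object (hypothetical helper)
    m = mask.astype(bool)
    border = np.zeros_like(m)
    # a pixel lies on the border if any 4-neighbor has a different label
    border[:-1, :] |= m[:-1, :] != m[1:, :]
    border[1:, :]  |= m[1:, :]  != m[:-1, :]
    border[:, :-1] |= m[:, :-1] != m[:, 1:]
    border[:, 1:]  |= m[:, 1:]  != m[:, :-1]
    weights = np.ones(mask.shape, dtype=np.float32)
    weights[border] = border_weight
    return weights

mask = np.zeros((5, 5), dtype=np.int64)
mask[1:4, 1:4] = 1
w = border_weights(mask)
```

Interior object pixels and background pixels far from the object keep weight 1, while pixels on either side of the boundary get the higher weight.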

A similar solution was given in this thread: Pixelwise weights for MSELoss, so I tried to implement it for `NLLLoss2d`.

Here is my approach:

```
import torch
import torch.nn.functional as F

# Set properties
batch_size = 10
out_channels = 2
W = 10
H = 10
# Initialize logits etc. with random values
logits = torch.FloatTensor(batch_size, out_channels, H, W).normal_()
target = torch.LongTensor(batch_size, H, W).random_(0, out_channels)
weights = torch.FloatTensor(batch_size, 1, H, W).random_(1, 3)
# Calculate log probabilities over the class dimension
logp = F.log_softmax(logits, dim=1)
# Gather log probabilities with respect to target
logp = logp.gather(1, target.view(batch_size, 1, H, W))
# Multiply with weights
weighted_logp = (logp * weights).view(batch_size, -1)
# Rescale so that loss is in approx. same interval
weighted_loss = weighted_logp.sum(1) / weights.view(batch_size, -1).sum(1)
# Average over mini-batch and negate, since we minimize the negative log likelihood
weighted_loss = -weighted_loss.mean()
```
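As a sanity check (my own addition, not from the thread), the rescaling can be compared against `F.nll_loss` with uniform weights: when every weight is 1, the weighted average per sample should reduce to a plain mean, and the negated batch mean should match the built-in loss exactly.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch_size, out_channels, H, W = 10, 2, 10, 10
logits = torch.randn(batch_size, out_channels, H, W)
target = torch.randint(0, out_channels, (batch_size, H, W))
weights = torch.ones(batch_size, 1, H, W)  # uniform weights for the comparison

logp = F.log_softmax(logits, dim=1)
gathered = logp.gather(1, target.view(batch_size, 1, H, W))
weighted_logp = (gathered * weights).view(batch_size, -1)
# per-sample weighted mean of log probs, negated to form a loss
loss = -(weighted_logp.sum(1) / weights.view(batch_size, -1).sum(1)).mean()

reference = F.nll_loss(logp, target)
print(torch.allclose(loss, reference))  # should print True
```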

I’m not sure about the rescaling part, where I sum the weighted log probabilities for each sample and divide by the corresponding sum of weights (`weighted_logp.sum(1) / weights.view(batch_size, -1).sum(1)`).

Am I right in assuming that this should keep the log probs in approx. the same interval even if one target has a lot of borders, and therefore a higher total weight?

Also, should I sum or average the log probs (`weighted_logp.sum(1)` vs. `weighted_logp.mean(1)`)?
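Regarding the interval question, my understanding is that dividing by the sum of weights turns the expression into a weighted mean of the log probabilities, so rescaling all weights by a constant leaves the result unchanged. A quick check with made-up numbers (names are mine, purely illustrative):

```python
import torch

torch.manual_seed(0)
logp = -torch.rand(4, 100)       # fake per-pixel log probs, one row per sample
w = torch.rand(4, 100) + 1.0     # positive per-pixel weights

# weighted mean: sum of weighted log probs divided by sum of weights
norm = (logp * w).sum(1) / w.sum(1)
# scaling every weight by 10 leaves the weighted mean unchanged
scaled = (logp * (10 * w)).sum(1) / (10 * w).sum(1)
print(torch.allclose(norm, scaled))  # should print True
```

A plain `weighted_logp.sum(1)` would not have this property: a sample with many high-weight border pixels would dominate the batch loss simply because of its larger total weight.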

I tried to search for the `F.nll_loss()` implementation but couldn’t find it.

Does this approach make sense, or am I missing something?

Greets!