Hi, I have a semantic segmentation network. At some point, I want to define a loss function that calculates the boundaries in the predicted image. Even though the task is for binary images, the output of the network is in logits. Is there any differentiable way to transform the predicted mask from logits to a binary image? I tried the softargmax but it did not help.

Moreover, in this paper “A Novel Boundary Loss Function in Deep Convolutional Networks to Improve the Buildings Extraction From High-Resolution Remote Sensing Images” they propose a binarization function but I don’t understand how it changes the dimensions. Can anyone help?

Do you want the segmentation network to output a mask with two classes - Border and background?

No, my prediction will be building vs. background; I want to use the boundaries in the loss function.

Hi Yunus!

There is no usefully differentiable way to map to a binary (or any discrete) result. For the range of input values that map to `0`, the derivative will be zero. Right when you jump to `1`, the derivative will be technically undefined. Then for the range that maps to `1`, the derivative will be zero again.

So even though it is well defined almost everywhere, the derivative provides no useful information about how to minimize your loss function with gradient descent.
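To illustrate this concretely, a minimal PyTorch sketch (the tensor values are my own example, not from the thread) showing that binarizing logits kills the gradient:

```python
import torch

logits = torch.tensor([-2.0, -0.5, 0.5, 2.0], requires_grad=True)

# A hard comparison like (logits > 0).float() detaches from the graph
# entirely. Rounding the probabilities keeps the graph, but round() has
# derivative zero everywhere it is defined:
probs = torch.sigmoid(logits)
binary = probs.round()

binary.sum().backward()
print(logits.grad)  # tensor([0., 0., 0., 0.]) -- no useful signal
```

Gradient descent sees only those zeros, so the binarized mask cannot drive learning.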

In general, you would want to figure out how to use the logits directly in your loss function. (`BCEWithLogitsLoss` does so quite effectively for binary classification problems.)
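For example, a short sketch of feeding raw logits straight into `BCEWithLogitsLoss` (the shapes here are an assumed single-channel segmentation output, not from the original post):

```python
import torch
import torch.nn as nn

# BCEWithLogitsLoss fuses sigmoid + binary cross entropy in one
# numerically stable step, so the network's raw logits go in directly.
criterion = nn.BCEWithLogitsLoss()

logits = torch.randn(4, 1, 8, 8, requires_grad=True)  # (N, C, H, W) raw logits
target = torch.randint(0, 2, (4, 1, 8, 8)).float()    # binary ground-truth mask

loss = criterion(logits, target)
loss.backward()  # gradients flow: logits.grad is nonzero
```

No explicit sigmoid or thresholding is needed anywhere in the loss computation.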

If you must (but you probably don’t want to for reasons of numerical stability), you can convert the logits to probabilities by passing them through `sigmoid()`.
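If you take that route, the soft mask stays differentiable, as this small sketch (my own example values) shows:

```python
import torch

logits = torch.tensor([-4.0, 0.0, 4.0], requires_grad=True)
probs = torch.sigmoid(logits)  # soft "mask" in (0, 1), fully differentiable

probs.sum().backward()
# Gradient is nonzero everywhere: sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)),
# though it becomes very small for large |x|.
print(logits.grad)
```

Note that the gradient shrinks toward zero for saturated logits, which is part of why working on the logits directly is usually preferred.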

(Another approach – that I don’t recommend – would be to binarize your logits, but then use surrogate derivatives that approximate your binarization derivatives and that somehow capture the information you need to be able to perform gradient descent.)
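One common surrogate-derivative trick is the straight-through estimator, sketched below with my own example values (this is an illustration of the general idea, not something endorsed in the post above):

```python
import torch

logits = torch.tensor([-1.0, 0.2, 1.5], requires_grad=True)
probs = torch.sigmoid(logits)
hard = (probs > 0.5).float()  # binary forward value, no gradient of its own

# Straight-through estimator: detach() blocks the gradient of (hard - probs),
# so the forward value is `hard` but the backward pass behaves as if the
# output were `probs` (i.e. the threshold is treated as the identity).
binary_ste = probs + (hard - probs).detach()

binary_ste.sum().backward()
print(binary_ste.detach())  # tensor([0., 1., 1.]) -- truly binary
print(logits.grad)          # nonzero: sigmoid's gradient used as the surrogate
```

The mismatch between the forward function and its surrogate gradient is exactly why this is a last resort rather than a first choice.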

Without knowing what your loss function looks like, it’s hard to give concrete advice.

Best.

K. Frank