How to enforce symmetry in segmentation task loss

I have a segmentation-like task where the output is highly unbalanced. To address the imbalance I use focal loss. I have another piece of information that I would like to use in the training loss: the segmentation maps are always symmetric.

Is there a way to use this information in the loss?
I couldn’t find a way.

Try DiceLoss ?

And why would dice loss guarantee symmetry?

DiceLoss is often used when the data is unbalanced.
As for symmetry, I think it might be possible to train on just half of the ground-truth masks.

Hi David!

As a general rule, if your problem has structure that is (nearly) always true,
it is best to see if you can build that structure into your network so that
you get the desired result “automatically,” without the network having to
“learn” the structure through training.

Let me use simple reflection symmetry as an example. (You haven’t told
us what specific symmetry your problem exhibits.)

Let’s say that the inputs to your network are 1024x1024 single-channel
grayscale images that are not symmetric, and your ground-truth
segmentation masks are 1024x1024 binary masks that are symmetric
when reflected across the vertical axis, that is, they are unchanged
when flipped left to right. Therefore all of the information in the masks is
contained in only their right halves – say, the 512x1024 unflipped right half.

So you could build a network that takes as input 512x1024x2-channel
images, where the two channels are the right half of your original image,
unchanged, and the left half, flipped left to right. Have the output of your
network (the predicted segmentation masks) be 512x1024 binary masks
(which will be natural, given the 512x1024 input).

This network has the reflection symmetry “built in,” makes full use of all
of the information in the original (not-necessarily-symmetric) 1024x1024
input images, and predicts what can be understood as fully-symmetric
segmentation masks (which you could “unfold” back out to be full 1024x1024
symmetric masks if you wanted to for some reason).
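The folding and unfolding described above can be sketched as follows. This is a minimal PyTorch sketch of my own; the function names and the single-image (H, W) shapes are illustrative (no batch or channel bookkeeping), and W is assumed even:

```python
import torch

def fold_input(img):
    """Fold an (H, W) image into a 2-channel (2, H, W//2) tensor:
    channel 0 is the unchanged right half, channel 1 is the left half
    flipped left-to-right so the two channels align pixel-for-pixel."""
    H, W = img.shape
    right = img[:, W // 2:]                   # (H, W//2)
    left_flipped = img[:, : W // 2].flip(-1)  # (H, W//2), mirrored
    return torch.stack([right, left_flipped], dim=0)

def unfold_mask(half_mask):
    """Mirror a predicted (H, W//2) half-mask back out to a full,
    exactly left-right-symmetric (H, W) mask."""
    return torch.cat([half_mask.flip(-1), half_mask], dim=-1)
```

Because the full mask is reconstructed by mirroring, its symmetry holds exactly by construction, not just approximately through training.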

If you can’t go this route, and want to use the loss function to encourage
symmetry of your predictions, you could add an “asymmetry penalty” to
your loss. Continuing with the reflection-symmetry example, take your
predicted asymmetrical, 1024x1024 mask, and calculate, say, the
intersection-over-union loss of the flipped 512x1024 left half versus the
unflipped 512x1024 right half. Add this penalty loss (with an adjustable
multiplicative weight) to whatever regular training loss you are using,
and train with this combined loss. As you make your penalty weight
increasingly large, the training will be pushed to predict masks that are
closer and closer to being symmetric.
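A minimal sketch of such an asymmetry penalty, using a soft (differentiable) intersection-over-union between the flipped left half and the right half of the predicted probability map. The function names and the `lam` weight are placeholders of my own, not anything from a library:

```python
import torch

def asymmetry_penalty(pred):
    """Soft-IoU asymmetry penalty for a batch of predicted masks.
    pred: (N, H, W) tensor of probabilities in [0, 1].
    Returns 1 - IoU(flipped left half, right half), averaged over the
    batch; it is 0 when every mask is perfectly left-right symmetric."""
    W = pred.shape[-1]
    left = pred[..., : W // 2].flip(-1)   # mirrored left half
    right = pred[..., W // 2:]
    inter = (left * right).sum(dim=(-2, -1))
    union = (left + right - left * right).sum(dim=(-2, -1))
    iou = inter / union.clamp_min(1e-6)   # guard against empty masks
    return (1.0 - iou).mean()

# combined loss (focal_loss and lam are placeholders for whatever
# base loss and penalty weight you use):
#     loss = focal_loss(pred, target) + lam * asymmetry_penalty(pred)
```

Since everything here is built from differentiable tensor ops, the penalty backpropagates like any other loss term.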


K. Frank


So I can’t use the first approach because I have different spatial input sizes to the network.
Regarding the second approach, I have symmetry along the diagonal. But the example you gave was spot on.


Hi David!

I think you still could build symmetry into the network itself.

Pad your smaller images to be the size of the largest, and pad them to be
square, with the line of symmetry being, say, the lower-left-to-upper-right
main diagonal. Form the two-channel image out of the flipped and unflipped
halves. Set the upper-left (triangular) halves of both channels to zero, and
predict the mask (half of the symmetrical mask) that lies in the lower-right
triangular half of your square output “image.”
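A sketch of this diagonal folding, using for simplicity the upper-left-to-lower-right main diagonal (so the reflection is just a transpose; the anti-diagonal case described above adds a pair of flips). All names and shapes here are my own illustration:

```python
import torch

def fold_diagonal(img, size):
    """Zero-pad an (H, W) image (H, W <= size) to (size, size) and fold
    it into a 2-channel (2, size, size) tensor whose channels are the
    padded image and its transpose (its reflection across the main
    diagonal). The upper triangle of both channels is zeroed, so the
    network only sees (and predicts) the lower-triangular half."""
    H, W = img.shape
    padded = torch.zeros(size, size, dtype=img.dtype)
    padded[:H, :W] = img
    chans = torch.stack([padded, padded.T], dim=0)
    tril = torch.tril(torch.ones(size, size, dtype=torch.bool))
    return chans * tril  # broadcast: zeroes the upper triangle

def unfold_diagonal(half_mask):
    """Mirror a lower-triangular (size, size) half-mask back into a
    full mask that is exactly symmetric across the main diagonal."""
    lower = torch.tril(half_mask)
    return lower + lower.T - torch.diag(torch.diagonal(lower))
```

As with the left-right case, the unfolded mask is symmetric by construction, and the padding lets inputs of different spatial sizes share one square working size.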


K. Frank