Negative Examples for Image Segmentation

Can anyone point me in the right direction for actively incorporating negative class examples during training of a segmentation network?

Most papers I’m aware of simply label negative classes as background (i.e., network output = 0), but I am wondering whether there are any more proactive approaches.

It may be beneficial to perform binary classification on the image to better determine whether the object you are interested in segmenting is present at all. If you are using a U-Net architecture, you can do both classification and segmentation at once by passing the output of the bottleneck layer (which encodes the context of the input) into a single-output dense layer with sigmoid activation. Any positive pixels in the mask generated by the decoder half of the U-Net can then be disregarded if the binary classifier deems that the object is not in the image. I used this technique in a pneumothorax segmentation competition hosted on Kaggle, and it improved my score.
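
For reference, here is a minimal sketch of what this could look like in PyTorch. The `TinyUNet` module, its layer sizes, and the 0.5 thresholds are illustrative assumptions, not the exact architecture used in the competition; the point is only where the classification head attaches:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy U-Net-style model with an auxiliary classification head.

    Only one downsampling stage is shown; a real U-Net would have
    several stages plus skip connections.
    """

    def __init__(self, in_ch=1):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Classification head on the bottleneck: global pooling -> 1 logit.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1)
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # per-pixel logit for the segmentation mask
        )

    def forward(self, x):
        feats = self.enc(x)               # bottleneck features
        cls_logit = self.cls_head(feats)  # "is the object present?" logit
        mask_logit = self.dec(feats)      # segmentation logits
        return mask_logit, cls_logit


model = TinyUNet()
x = torch.randn(2, 1, 64, 64)
mask_logit, cls_logit = model(x)

# At inference: zero out the predicted mask when the classifier says "absent".
present = torch.sigmoid(cls_logit) > 0.5   # (B, 1)
mask = torch.sigmoid(mask_logit) > 0.5     # (B, 1, H, W)
mask = mask & present.view(-1, 1, 1, 1)
```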

I discovered this myself as well and am already using an additional classifier. Although this does indeed increase performance, it may fail in cases where both Class A and Class B are present in the image and both get labeled as Class A.

By the way, it seems that the additional classification loss alone improves the segmentation results, even when the classification output is not used to compute the final prediction. I think it simply helps the encoder learn better filters.
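
A minimal sketch of such a joint loss, reusing the hypothetical `TinyUNet` from above; the image-level label is derived from the mask, and the 0.1 weight on the auxiliary term is an arbitrary assumption you would need to tune:

```python
import torch.nn.functional as F

def joint_loss(mask_logit, cls_logit, mask_target, aux_weight=0.1):
    """Segmentation loss plus an auxiliary image-level classification loss.

    The object is considered "present" if the ground-truth mask contains
    any positive pixel, so no extra labels are required.
    """
    seg_loss = F.binary_cross_entropy_with_logits(mask_logit, mask_target)
    # (B, 1) image-level target: any positive pixel -> object present.
    cls_target = (mask_target.flatten(1).amax(dim=1, keepdim=True) > 0).float()
    cls_loss = F.binary_cross_entropy_with_logits(cls_logit, cls_target)
    # The classification head only shapes the encoder's gradients here;
    # its output can be ignored entirely at inference time.
    return seg_loss + aux_weight * cls_loss

# Usage with the model above:
mask_target = (torch.rand(2, 1, 64, 64) > 0.9).float()
loss = joint_loss(*model(x), mask_target)
loss.backward()
```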