Network learns the opposite color from the desired one

Hi,

My network learns the opposite colour from the desired one. I guess it is somehow related to the weights, but how can I fix this issue? I attached images: black veins are the desired output, while white veins are the output of my network. (My network is a UNet.)
I trained it for 200 iterations, and I still don't see the desired change.

Thanks,


![img2 copy 2|500x500](upload://2m7mnxNPmp2FTnFVjlOrKzEf4rD.png)

Could you please provide more information? Maybe post a snippet of the code that shows how you generate these images?

What is the range of values in the input and output? What is the size of the last convolution layer in your network?

If you are using a UNet for segmentation, the output should be binary, but the image you show has many shades of gray. Look at Figure 2 of the UNet paper: the output segmentation map should only contain black or white.

The main thing I recommend is to call output.max(C) on the output of your network (if you don't already do this), where C is the index of the channel dimension in your output tensor. This is usually 1 when working with images: [batch_size, channels, height, width].
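For example (a minimal sketch, assuming a 2-class UNet whose output has shape [batch_size, 2, height, width]; the tensor shapes here are made up for illustration):

```python
import torch

# Hypothetical 2-class UNet output: [batch_size, channels, height, width]
output = torch.randn(4, 2, 256, 256)  # raw logits straight from the network

# argmax over the channel dimension (dim=1) gives a hard class index
# per pixel; output.max(1)[1] returns the same indices
pred = output.argmax(dim=1)  # shape: [4, 256, 256], values in {0, 1}

# Scale to 0/255 so the mask can be saved or shown as a grayscale image
mask = (pred * 255).to(torch.uint8)
```

After this step every pixel is exactly black or white, which is what the segmentation map in Figure 2 looks like.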

It may be that the image you are seeing is the raw logits taken directly from the network output. If your network is training properly, the vein pixels will have the highest logit values, which would explain why they appear white in your image.
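If the mapping simply comes out inverted after taking the argmax (veins rendered white when you want them black), you can flip the binary mask before visualizing. A short sketch, reusing the pred tensor from the snippet above and assuming veins are class 1:

```python
# Flip the binary mask: veins (class 1) -> 0 (black),
# background (class 0) -> 255 (white)
inverted = ((1 - pred) * 255).to(torch.uint8)
```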