U-Net training returns a loss of -0.0 from the first iteration

As the title says, I'm implementing a U-Net for segmentation of retina images. The problem appears during training: the loss is -0.0 from the very first iteration and never changes. The inputs are RGB images, the network outputs a 1-channel image, and the label is a 1-channel black-and-white (binary) mask.
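For reference, here is a minimal sketch of the kind of setup I mean. The shapes and the choice of `nn.CrossEntropyLoss` are illustrative assumptions, not my exact code, but this combination of a 1-channel output with a softmax-based loss does reproduce the constant -0.0 symptom:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real U-Net output: 1-channel logits.
logits = torch.randn(4, 1, 8, 8)                     # (N, C=1, H, W)
# Background-only mask as class indices; with C=1 the only valid index is 0.
labels = torch.zeros(4, 8, 8, dtype=torch.long)      # (N, H, W)

# CrossEntropyLoss applies softmax over the channel dimension; with a
# single channel the softmax output is always 1, so the log-probability
# is 0 and the loss is always -0.0, regardless of the weights.
loss = nn.CrossEntropyLoss()(logits, labels)
print(loss.item())
```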

After reading several posts, I'm currently trying a 2-channel output instead of 1, but in that case the loss computation fails with an error before it even runs.
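In case it helps pinpoint the error in the 2-channel case: assuming `nn.CrossEntropyLoss` again (an assumption on my part), it expects the target to be class indices of shape (N, H, W) with dtype long, not a (N, 1, H, W) float mask. A sketch of that conversion, with hypothetical shapes:

```python
import torch
import torch.nn as nn

# Hypothetical 2-channel logits and a 1-channel BW mask as loaded from disk.
logits = torch.randn(4, 2, 8, 8)                     # (N, C=2, H, W)
mask = torch.randint(0, 2, (4, 1, 8, 8)).float()     # (N, 1, H, W), float

# CrossEntropyLoss needs class indices: drop the channel dim, cast to long.
target = mask.squeeze(1).long()                      # (N, H, W), dtype long
loss = nn.CrossEntropyLoss()(logits, target)
print(loss.item())  # a proper positive loss, not -0.0
```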