Output of Unet is a white image

I have trained a U-Net (architecture posted below). The loss decreased to around 1e-4, but when I run the model on the training data, the output is a "white" blank image. I have denormalized the output as well, but with no success.

How many output classes are you using and how are you denormalizing the output?

2 classes. I normalized the images with a mean of 0.5 and a standard deviation of 0.5. For denormalizing I did this:
img*0.5 +0.5
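
The normalization itself (the trans used later in my inference code) looks roughly like this; the exact pipeline isn't in my post, so this is just a sketch of what I mean by mean 0.5 / std 0.5, written with torchvision:

import torchvision.transforms as T

# Assumed pipeline, not copied from my script: convert to a CHW float tensor in [0, 1],
# then shift/scale each channel to roughly [-1, 1].
trans = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# Denormalizing is then the inverse of Normalize: img * 0.5 + 0.5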

The output of your model will most likely not have the same normalization as your input.
Correct me if I misunderstood the use case, but I assume your model outputs a 2-channel activation containing the class logits for each pixel.

Did you “denormalize” this output using the mentioned formula?

Could you try to get the predictions via torch.argmax(output, 1) and try to plot this instead?
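
Something along these lines (a minimal sketch, assuming output has the shape [batch_size, 2, height, width]):

import torch
import matplotlib.pyplot as plt

# output: model logits with shape [batch_size, 2, H, W]
pred = torch.argmax(output, 1)       # [batch_size, H, W], class index (0 or 1) per pixel
plt.imshow(pred[0].cpu().numpy())    # plot the predicted class map of the first sample
plt.show()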

Sir, the architecture I used outputs a single-channel image. My use case is segmenting roads from an image. The architecture is:

import torch
import torch.nn as nn

class Unet(nn.Module):
    '''U-Net architecture with a single output channel.'''
    def __init__(self, inp, out):
        super(Unet, self).__init__()
        # Encoder (contracting path)
        self.c1 = self.contracting_block(inp, 16)
        self.c2 = self.contracting_block(16, 32)
        self.c3 = self.contracting_block(32, 64)
        self.maxpool = nn.MaxPool2d(2)
        # Decoder (expanding path)
        self.upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True)
        self.c4 = self.contracting_block(32 + 64, 32)
        self.c5 = self.contracting_block(16 + 32, 16)
        self.c6 = nn.Conv2d(16, out, 1)  # final 1x1 conv to `out` channels

    def contracting_block(self, inp, out, k=3):
        # Two conv -> ReLU -> BatchNorm stages
        block = nn.Sequential(
            nn.Conv2d(inp, out, k, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(out),
            nn.Conv2d(out, out, k, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(out)
        )
        return block

    def forward(self, x):
        # Encoder, keeping the intermediate activations as skip connections
        conv1 = self.c1(x)
        x = self.maxpool(conv1)
        conv2 = self.c2(x)
        x = self.maxpool(conv2)
        conv3 = self.c3(x)
        # Decoder: upsample and concatenate the skip connections
        x = self.upsample(conv3)
        x = torch.cat([conv2, x], dim=1)
        x = self.c4(x)
        x = self.upsample(x)
        x = torch.cat([conv1, x], dim=1)
        x = self.c5(x)
        x = self.c6(x)  # raw logits, no final activation
        return x
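
A quick shape check (assuming a 3-channel input and a single output channel) shows that the network returns one channel per pixel:

# Sanity check with a dummy input (3 input channels and 1 output channel assumed)
net = Unet(3, 1)
x = torch.randn(1, 3, 256, 256)
print(net(x).shape)   # torch.Size([1, 1, 256, 256])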

The inference loop is as follows:

import cv2
import numpy as np
import torch

# img (the input frame), trans (the normalization transform), net and device
# are defined earlier in the script
img = cv2.resize(img, (256, 256))
img = trans(img).unsqueeze(0)          # add the batch dimension -> [1, C, 256, 256]
img = img.type(torch.FloatTensor)
img = img.to(device)
mask = net(img)                        # raw single-channel network output

# "denormalize" the network output the same way as the input image and show it
mask = mask[0].cpu().detach().numpy()
mask = np.transpose(mask, (1, 2, 0))   # CHW -> HWC for OpenCV
mask = mask * 0.5 + 0.5
cv2.imshow("mask", mask)

# denormalize and show the input image
img = img.squeeze(0).cpu().detach().numpy()
img = np.transpose(img, (1, 2, 0))     # CHW -> HWC for OpenCV
img = img * 0.5 + 0.5
cv2.imshow("img", img)
cv2.waitKey(1)
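
In case it helps to clarify what I am plotting: the single output channel holds raw logits (there is no activation after the final conv), so I guess the usual way to visualize a binary road mask would be a sigmoid plus a threshold rather than the image denormalization above. A rough sketch, assuming the model was trained with a sigmoid-based loss such as nn.BCEWithLogitsLoss (not shown in this thread):

with torch.no_grad():
    logits = net(img)                  # [1, 1, H, W] raw logits
    probs = torch.sigmoid(logits)      # probabilities in [0, 1]
    pred = (probs > 0.5).float()       # hard binary mask

mask_np = pred[0, 0].cpu().numpy()     # H x W array of 0.0 / 1.0
cv2.imshow("predicted mask", mask_np)  # OpenCV scales float images from [0, 1]
cv2.waitKey(1)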

@ptrblck Thank you very much for helping.