Blank segmentation map for UNet - Can it be a bug in PyTorch?

Hi,

I’ve recently been trying to implement UNet in PyTorch. You can find the implementation here

I’m encountering a very weird problem here!
The network apparently converges (the loss value decreases), but when I run inference on some query images (taken from the training set), the output is very poor, most often a blank image. In other words, the predicted per-pixel values fall toward zero after several iterations! I’m using Sigmoid/BCELoss() and the Adam optimiser. The data and the rest of the pipeline have been checked and are as expected.
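
For context, here is a minimal sketch of the setup described above (Sigmoid output, BCELoss, Adam). The tiny conv model and the dummy tensors are placeholders I’ve assumed so the snippet runs on its own; they are not the code from the linked repository.

```python
import torch
import torch.nn as nn

# Stand-in for the real UNet: any model whose final layer is nn.Sigmoid fits this setup.
model = nn.Sequential(nn.Conv2d(1, 1, kernel_size=3, padding=1), nn.Sigmoid())
criterion = nn.BCELoss()                        # expects probabilities in [0, 1]
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(4, 1, 64, 64)              # dummy batch standing in for the data loader
masks = torch.randint(0, 2, (4, 1, 64, 64)).float()

optimizer.zero_grad()
preds = model(images)                           # sigmoid-activated predictions, shape (N, 1, H, W)
loss = criterion(preds, masks)
loss.backward()
optimizer.step()
```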

On the other hand, I have a successful implementation of UNet in Theano, and I tried my best to keep the training procedure of the two networks identical (e.g. the same training set, the same hyper-parameters, and so on), yet I still get poor performance with PyTorch!

I do not see any reason for the poor performance other than a bug in the PyTorch backend!!
Do you have any comments on the implementation, or any new directions to explore for why performance is poor with PyTorch?

Thanks
Saeed

The link to the implementation doesn’t work. I suspect a lot of pixels in the training images belong to class 0 while only a few have label 1. The network then learns to mark everything as 0, so you get a very low loss value and a blank segmentation output. One common way to counter this is sketched below.
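
This is a sketch of one possible workaround, not something from the original thread: up-weight the rare foreground class so that predicting all zeros is no longer a cheap way to lower the loss. The ratio of 10.0 is an illustrative guess (roughly background pixels divided by foreground pixels), and the logits/masks tensors are dummies.

```python
import torch
import torch.nn as nn

# pos_weight > 1 penalises missed foreground pixels more heavily; 10.0 is only
# an illustrative background-to-foreground ratio, not a value from this thread.
pos_weight = torch.tensor([10.0])
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(4, 1, 64, 64)              # raw network outputs (no sigmoid before this loss)
masks = torch.randint(0, 2, (4, 1, 64, 64)).float()
loss = criterion(logits, masks)
```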

It seems he already solved this issue: UNet implementation