The repository includes links to train/eval/test data as well as config files to replicate their results. However, when I test, I notice that the output from the network contains odd float values, e.g. -7.0234. I assumed the output would be a mask in [0, 1], since this is binary segmentation. Is there an operation I have to perform on the output to get it into the correct mask format?
The following is included in the config, so I thought it would handle this and give me proper mask values:
final_sigmoid: true
is_segmentation: true
@ptrblck (Tagging because you have helped me so many times, both when I post and when I read your replies to others haha)
Based on this comment from the repository, it seems the final activations are only used during prediction, not training:
apply final_activation (i.e. Sigmoid or Softmax) only during prediction. During training the network outputs logits and it’s up to the user to normalize it before visualising with tensorboard or computing validation metric
so the observed values seem reasonable for logits.
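To illustrate, here is a minimal sketch of turning raw logits into a binary mask by applying a sigmoid and thresholding. The example values are made up for illustration; the 0.5 threshold is a common but tunable choice, not something the repository prescribes:

```python
import torch

# Hypothetical raw network output (logits) for binary segmentation;
# values like -7.0234 are expected here, since no final activation
# was applied during training.
logits = torch.tensor([[-7.0234, 0.5, 3.2],
                       [-0.1, 6.8, -2.4]])

# Sigmoid maps logits to probabilities in [0, 1].
probs = torch.sigmoid(logits)

# Threshold the probabilities to obtain a binary mask.
mask = (probs > 0.5).float()
```

The same normalization would also be needed before visualizing predictions in tensorboard or computing validation metrics, as the repository comment notes.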
In that case either self.testing is still set to False (it seems to be set during initialization, and I don't know why the internal self.training attribute isn't used) or self.final_activation is None.
Could you check both of these attributes and make sure they are set appropriately?
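For reference, the pattern described above can be sketched with a toy module. This is not the repository's actual implementation, just an assumed minimal reproduction of the gating logic, so you can see how either attribute being wrong would leave you with raw logits:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Toy stand-in for the described behavior: the final activation
    is only applied when `testing` is True and an activation exists."""
    def __init__(self, final_sigmoid=True, testing=False):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=1)
        self.testing = testing
        # Mirrors the `final_sigmoid: true` config option.
        self.final_activation = nn.Sigmoid() if final_sigmoid else None

    def forward(self, x):
        x = self.conv(x)
        # Activation is skipped if testing is False OR the activation is None,
        # in which case the output stays in logit space.
        if self.testing and self.final_activation is not None:
            x = self.final_activation(x)
        return x
```

Checking `model.testing` and `model.final_activation` on your loaded model should tell you which branch you are hitting.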