3D U-Net output interpretation

Hello,

I am trying to utilize the following open-source 3D U-Net: https://github.com/wolny/pytorch-3dunet

The repository includes links to train/eval/test data as well as config files to replicate their results. However, when I test, I notice that the output from the network contains odd float values, e.g. -7.0234. I assumed the output would be a mask in [0, 1], since it is a binary segmentation. Is there an operation I have to perform on the output to get it into the correct mask format?
The following is included in the config, so I thought that would handle it and give me proper mask values:

  final_sigmoid: true
  is_segmentation: true

@ptrblck (Tagging you because you have helped me so many times, both when I post and from reading your replies to others, haha)

Any help appreciated,

Kyle

Based on this comment from the repository, it seems the final activations are only used during prediction, not training:

apply final_activation (i.e. Sigmoid or Softmax) only during prediction. During training the network outputs logits and it’s up to the user to normalize it before visualising with tensorboard or computing validation metric

so the observed values seem reasonable for logits. :slight_smile:
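
As a minimal sketch of that post-processing (assuming a single-channel binary segmentation output; the 0.5 threshold is just an example), you could apply a sigmoid to the logits and threshold the result:

    import torch

    # `logits` stands in for the raw network output,
    # e.g. shape [batch, 1, D, H, W] for single-channel binary segmentation
    logits = torch.randn(1, 1, 64, 64, 64)

    # normalize the logits to probabilities in [0, 1], then threshold to a binary mask
    probs = torch.sigmoid(logits)
    mask = (probs > 0.5).float()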

To clarify, I meant that when I run an example through prediction, the output still contains those float values, i.e. I run the command

predict3dunet --config <CONFIG>

And the predicted mask contains values such as -7.0412 etc.
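
For reference, this is roughly the post-processing I would have to apply to those saved values myself (a sketch; I'm assuming the predictions end up in an HDF5 file under a `predictions` dataset, which may differ depending on the config):

    import h5py
    import numpy as np

    # hypothetical output path; adjust to the actual file written by predict3dunet
    pred_path = "sample_predictions.h5"

    with h5py.File(pred_path, "r") as f:
        logits = f["predictions"][...]  # raw network output

    # sigmoid + threshold to turn the logits into a binary mask
    probs = 1.0 / (1.0 + np.exp(-logits))
    mask = (probs > 0.5).astype(np.uint8)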

In that case, either self.testing is still set to False (it seems to be set during initialization, and I don't know why the internal self.training attribute isn't used) or self.final_activation is None.
Could you check both of these attributes and make sure they are set appropriately?
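
For example, something along these lines (a sketch, assuming `model` refers to the constructed 3D U-Net instance used by the prediction script, with the checkpoint already loaded):

    import torch.nn as nn

    # check the attributes mentioned above
    print(model.testing)           # should be True during prediction
    print(model.final_activation)  # should be a Sigmoid module, not None

    # if either is wrong, set them manually before running prediction
    model.testing = True
    if model.final_activation is None:
        model.final_activation = nn.Sigmoid()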