Multiple labels compressed during inference


I followed the @MONAI BTCV UNETR tutorial to segment my own dataset, as described in this thread. The training results look reasonable, so now I'd like to try the model on new data.

I adapted the UNet inference example for my UNETR model, but I must have made an error somewhere, as the output labels are in [0, 1] rather than the expected [0, 50]. Any pointers on why this is occurring would be appreciated. I have posted my notebook here.
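For context, this is roughly the post-processing step I expected (a simplified NumPy sketch, not the notebook's actual code): discrete labels should come from an argmax over the class-channel axis, not from thresholding.

```python
import numpy as np

num_classes = 51
# Dummy stand-in for the network output: [batch, classes, H, W, D] logits
logits = np.random.randn(1, num_classes, 4, 4, 4)

# Discrete labels come from an argmax over the channel axis (axis=1),
# giving integer class indices in [0, 50] per voxel.
labels = logits.argmax(axis=1)  # shape [1, 4, 4, 4]

# A binary-looking result usually means the argmax ran over a 1- or
# 2-channel tensor, or a sigmoid/threshold was applied instead.
```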

Could you verify that:

  • your model indeed has 51 output channels and returns logits in the shape [batch_size, num_classes=51, height, width] (plus a depth dimension for 3D volumes)
  • the target tensors contain all class indices in [0, 50]
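The two checks above can be sketched like this (with dummy tensors; substitute your model's actual output and your real label volumes):

```python
import numpy as np

num_classes = 51
# Dummy model output and targets for a 3D case: [B, 51, H, W, D] logits
# and integer targets of shape [B, H, W, D].
logits = np.random.randn(2, num_classes, 8, 8, 8)
target = np.random.randint(0, num_classes, (2, 8, 8, 8))

# 1) the channel axis of the logits must equal the number of classes
channels_ok = logits.shape[1] == num_classes

# 2) every target index must lie in [0, num_classes - 1]
targets_ok = target.min() >= 0 and target.max() < num_classes
```

If `channels_ok` is False, the label collapse is explained already: an argmax over too few channels can only produce a handful of distinct values.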

If so, could you try to overfit a single sample with a target containing a variety of class indices and check if the model is able to learn it?
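A minimal version of that overfitting sanity check might look like the following. This is a sketch, not your actual setup: a tiny Conv3d stands in for the UNETR so it runs in seconds, but the shapes mirror the real case (logits [B, num_classes, H, W, D], integer targets).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_classes = 51

# Tiny stand-in model with one output channel per class.
model = nn.Conv3d(1, num_classes, kernel_size=3, padding=1)

# One synthetic sample whose target contains many different class indices.
x = torch.randn(1, 1, 8, 8, 8)
y = torch.randint(0, num_classes, (1, 8, 8, 8))

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

losses = []
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)  # logits [1, 51, 8, 8, 8] vs targets [1, 8, 8, 8]
    loss.backward()
    opt.step()
    losses.append(loss.item())

# After training, predictions should span more than two label values.
pred = model(x).argmax(dim=1)
```

If the loss does not drop on a single sample, the problem is in the model/loss wiring rather than the inference code.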