I’m doing semantic segmentation of 3D tensors. I’m using a standard 3D UNet and DiceLoss()/BCELoss(). I’m using a standard IoU (Intersection over Union) measure to calculate the accuracy/error of the network.
This function keeps returning 0 when given the network output and the target tensor, both of shape [1, 1, 32, 128, 128]. The target tensor has 0 for background and 1 for points of interest, so the function should eventually find some intersection. But even after many thousands of iterations the returned error is still 0, although the loss decreases.
Is there any reason I am not getting an actual value? Why, even though the loss is decreasing, is there still no intersection between the network output and the target?
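Since the IoU function itself isn't shown, here is a minimal sketch of how a tensor-based IoU is typically written, assuming sigmoid outputs and a binary target; the `threshold` and `eps` parameters are my additions, not from the question. The two usual causes of a constant 0 are forgetting to binarize the probabilities before intersecting, and dividing integer tensors:

```python
import torch

def iou(preds, targets, threshold=0.5, eps=1e-7):
    # Binarize the sigmoid outputs first; intersecting raw probabilities
    # with a dot product can give misleadingly small values.
    preds = (preds > threshold).float()
    targets = targets.float()
    intersection = (preds * targets).sum()
    union = preds.sum() + targets.sum() - intersection
    # Keep the division in float and pull out a Python scalar at the end.
    return (intersection / (union + eps)).item()
```

If this version returns a nonzero value on your tensors while your own function returns 0, the difference is almost certainly in the thresholding or in the dtype of the division.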
Here is another type of error calculation, and this one seems to work fine:
def error(preds, targets):
    assert preds.size() == targets.size()
    # Shapes are [batch, channel, depth, height, width].
    bs, c, d, h, w = preds.size()
    n_pixels = bs * c * d * h * w
    # Count the voxels where the prediction disagrees with the target.
    incorrect = preds.ne(targets).cpu().sum().item()
    # Divide the number of incorrectly classified voxels by the total.
    err = incorrect / n_pixels
    return round(err, 5)
This is returning an error value that seems to be working.
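For reference, the same pixel-error metric can be written without unpacking the shape at all, so it works for the 5-D tensors in the question as well; `pixel_error` is my name for this variant, not the original function:

```python
import torch

def pixel_error(preds, targets):
    # Fraction of voxels where the (already binarized) prediction
    # disagrees with the target; shape-agnostic via numel().
    assert preds.size() == targets.size()
    incorrect = preds.ne(targets).sum().item()
    return round(incorrect / preds.numel(), 5)

# Toy 5-D example matching the [batch, channel, depth, height, width]
# layout from the question, just much smaller:
preds = torch.zeros(1, 1, 2, 2, 2)
targets = torch.ones(1, 1, 2, 2, 2)
print(pixel_error(preds, targets))  # 1.0 -- every voxel is wrong
```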
Isn't this doing the same thing as the dot operation in the IoU function?
I ran into the same problem a few days ago in my accuracy function. The model outputs (logits) and the targets are both tensors, so there are two ways to calculate metrics: in NumPy or directly on tensors. I tried converting the tensors to NumPy arrays; it works, but it costs a lot of time when done at the end of every iteration. So I computed the metrics on tensors instead. Try printing your union and intersection: they are probably not 0, and the zero IoU is most likely caused by a type error. In my code I summed the valid pixels and the correct pixels, converted both to .long(), and computed the ratio as .long() / .long(); that integer division gave me 0. Cast to float before dividing, or use .item() to get the Python scalar corresponding to the tensor first.
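The type error described above can be reproduced in a few lines; note that what `/` does on long tensors is version-dependent (older PyTorch releases floor-divided, newer ones promote to float), so the cast is worth doing explicitly either way:

```python
import torch

inter = torch.tensor(150)  # e.g. intersection / correct-pixel count
union = torch.tensor(400)  # e.g. union / valid-pixel count

# On older PyTorch versions, long / long truncates toward zero,
# so any ratio below 1 silently becomes 0:
# inter.long() / union.long()  # -> 0 on those releases

# Casting to float (or extracting Python scalars) gives the real ratio:
ratio_float = (inter.float() / union.float()).item()
ratio_item = inter.item() / union.item()
print(ratio_float, ratio_item)  # 0.375 0.375
```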