Custom Dice Loss Function

Hi guys,
I’ve run into a problem with my Dice loss function:

import torch
import torch.nn as nn

class DiceLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceLoss, self).__init__()

    def forward(self, inputs, targets, smooth=1):
        inputs = torch.sigmoid(inputs)   # squash logits into (0, 1)
        inputs = inputs.view(-1)         # flatten predictions
        targets = targets.view(-1)       # flatten targets
        intersection = (inputs * targets).sum()
        dice = (2. * intersection + smooth) / (inputs.sum() + targets.sum() + smooth)
        return 1 - dice

def main():
    loss_fn = DiceLoss()   # loss_fn in main() is DiceLoss()
    loss_ = loss_fn(pred, label)

where:
pred shape: torch.Size([16, 4, 240, 240]) (batch_size, num_classes, height, width)
label shape: torch.Size([16, 240, 240]) (batch_size, height, width); each element is one of {0, 1, 2, 3}, giving the pixel’s class

After the flatten (.view(-1)) op:
pred shape: torch.Size([3686400]) (16 × 4 × 240 × 240)
label shape: torch.Size([921600]) (16 × 240 × 240)

The bug:

RuntimeError: The size of tensor a (3686400) must match the size of tensor b (921600) at non-singleton dimension 0

What I am curious about is that there is no error when I use nn.CrossEntropyLoss() as my loss_fn, but when I switch to DiceLoss(), the error above appears.
Please help me understand why this error occurs and how to implement DiceLoss so that it works.
Thank you so MUCH!!

Hi Double!

At issue is that your DiceLoss function expects labels that are
different in form from those used by CrossEntropyLoss (and
there is no reason to have expected them to be the same).

For a given batch sample and given pixel, CrossEntropyLoss
compares a vector of num_classes logits (akin to probabilities)
to a single integer class label.
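For concreteness, here is a minimal sketch with random stand-in
tensors of the shapes you posted:

    import torch
    import torch.nn as nn

    pred = torch.randn(16, 4, 240, 240)           # (N, C, H, W) logits
    label = torch.randint(0, 4, (16, 240, 240))   # (N, H, W) integer class labels

    # CrossEntropyLoss expects exactly this pairing, so it runs without error.
    loss = nn.CrossEntropyLoss()(pred, label)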

In contrast, your DiceLoss compares a vector of predicted class
probabilities with a vector of target class probabilities. (Either of
these could happen to be – or be restricted to – probabilities that
are all exactly either 0.0 or 1.0, but if you convert your predictions
to hard 0.0 / 1.0 probabilities, you will lose differentiability and not
be able to train.)

You will most likely prefer to use softmax() to convert your logit
predictions to probabilities:

        inputs = torch.nn.functional.softmax(inputs, dim=1)

You will need to convert your integer-class-label targets to
(0.0 / 1.0) class probabilities, most conveniently with one_hot():

        targets = torch.nn.functional.one_hot(targets, num_classes=num_classes).permute(0, 3, 1, 2)

(Please think about the shapes and structures of your tensors to be
sure that you understand why the .permute() is necessary: one_hot()
adds the class dimension last, while your predictions carry it in
dimension 1.)
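
Putting the pieces together, here is a minimal sketch of a
multi-class Dice loss along these lines. (The num_classes
constructor argument, the .float() cast, and the use of
reshape() in place of view() are my additions; reshape() is
needed because permute() makes the tensor non-contiguous.)

    import torch
    import torch.nn as nn
    import torch.nn.functional as functional

    class MultiClassDiceLoss(nn.Module):
        def __init__(self, num_classes):
            super().__init__()
            self.num_classes = num_classes

        def forward(self, inputs, targets, smooth=1):
            # inputs: (N, C, H, W) logits; targets: (N, H, W) integer class labels
            inputs = functional.softmax(inputs, dim=1)     # logits -> probabilities
            targets = functional.one_hot(targets, num_classes=self.num_classes)
            targets = targets.permute(0, 3, 1, 2).float()  # (N, H, W, C) -> (N, C, H, W)
            inputs = inputs.reshape(-1)
            targets = targets.reshape(-1)
            intersection = (inputs * targets).sum()
            dice = (2. * intersection + smooth) / (inputs.sum() + targets.sum() + smooth)
            return 1 - dice

With your pred of shape [16, 4, 240, 240] and label of shape
[16, 240, 240], both tensors now flatten to 16 × 4 × 240 × 240 =
3,686,400 elements, so the elementwise product in the
intersection is well defined.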

Best.

K. Frank


Hi Frank,
Thank you so, so much!
Your answer solved my problem. I appreciate it!!!
In fact, I changed my code following your advice and it worked. Along the way I ran into some little problems, like:

1. Wrong tensor type: e.g., the * op raised a “0 element has no grad_fn” error.
2. dot() was not implemented for Long tensors.

(I have solved both of these bugs.)

So I am wondering: could you please recommend some lessons or other material from which I can learn PyTorch systematically? There are always too many bugs, and dealing with them one by one is not efficient…
Thanks a lot!!!
Best.

Hi Double!

I would recommend the various Pytorch Tutorials. Work through some
of the basic tutorials and then maybe through some others that catch
your eye or are relevant to the projects you are working on.

I haven’t read it myself, but Tom V, one of the pytorch developers
and forum participants, has a book out, “Deep Learning with PyTorch,”
and it’s bound to be good, based on Thomas’s knowledge and
experience.

(And of course – read the documentation!)

The bad news is that there will always be too many bugs that you will
have to deal with one by one. But they do get easier to find and fix as
you gain more systematic knowledge of pytorch.

Best.

K. Frank


Hi Frank!
Thank you SO MUCH!! I will work through the tutorials and also read “Deep Learning with PyTorch.” You are right about the bugs. Thanks again!!
Best.