Kaggle Data Science Bowl - Loss not decreasing and predictions are not good

I am attaching the URL of the Jupyter notebook. I can’t seem to figure out why my loss is not decreasing. I have been stuck on this for 3 days; any help is very much appreciated. The command to download the data is in the Jupyter notebook.

Labels shape: (batch_size, 1, 256, 256)
Prediction shape: (batch_size, 1, 256, 256)

Is something wrong with how I calculate the loss?

for epoch in range(15):
    for i, data in enumerate(trainloader):
        inputs, labels = data
        inputs = Variable(inputs.cuda())
        labels = Variable(labels.cuda())

        # forward + backward + optimize
        optimizer.zero_grad()                  # zero the gradient buffers of all parameters
        outputs = model_pytorch(inputs)        # forward pass
        loss = loss_function(outputs, labels)  # compute the loss
        loss.backward()                        # backpropagation
        optimizer.step()                       # update the parameters with the new gradients

        if (i + 1) % 5 == 0:
            print('[%d, %5d] loss: %.4f' % (epoch, i + 1, loss.data[0]))

[Image of loss values]
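
As an aside, on PyTorch 0.4 and later the Variable wrapper is deprecated and loss.data[0] becomes loss.item(); on a current version the same loop would look roughly like this (a sketch, reusing the same model_pytorch, loss_function, optimizer, and trainloader):

import torch

device = torch.device('cuda')

for epoch in range(15):
    for i, (inputs, labels) in enumerate(trainloader):
        # tensors no longer need a Variable wrapper
        inputs = inputs.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()
        outputs = model_pytorch(inputs)
        loss = loss_function(outputs, labels)
        loss.backward()
        optimizer.step()

        if (i + 1) % 5 == 0:
            # loss.item() replaces the old loss.data[0] for scalar losses
            print('[%d, %5d] loss: %.4f' % (epoch, i + 1, loss.item()))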

How did you define your loss_function?
Your Colab notebook asks for some permission; am I doing something wrong?

@ptrblck Here is the link to the notebook (I have also updated it in the main post).

Thanks for the link!

There seem to be some issues in your training loop.
You are doing the following:

inputs = Variable(inputs).cuda()
labels = Variable(labels).cuda()
    
outputs = model_pytorch(inputs)
probs = F.sigmoid(outputs)
probs_flat = probs.view(-1)
labels_flat = probs.view(-1)
loss = loss_function(probs_flat, labels_flat.float())

You are using probs for both probs_flat and labels_flat (note labels_flat = probs.view(-1)), so the loss is computed between the predictions and themselves and should therefore return a zero loss.
However, you are also calling F.sigmoid() twice on outputs: the first time in the training loop and then again inside your loss definition:

class DiceLoss(_Loss):
    def forward(self, input, target):
        return 1 - dice_coeff(F.sigmoid(input), target)

This seems to be wrong.
Could you fix that and see if your model is learning?
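The flattened target should come from labels, and the sigmoid should be applied exactly once, i.e. something like this (a sketch, assuming the extra sigmoid is removed from the loss definition):

probs = F.sigmoid(outputs)       # sigmoid applied exactly once
probs_flat = probs.view(-1)
labels_flat = labels.view(-1)    # flatten the labels, not the predictions
loss = loss_function(probs_flat, labels_flat.float())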

@ptrblck I changed the loss function to BCELoss. Can you please check it now?
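
For reference: nn.BCELoss expects probabilities in [0, 1], so the F.sigmoid call has to stay in the training loop, while nn.BCEWithLogitsLoss takes the raw logits and applies the sigmoid internally in a numerically stable way. A minimal sketch:

import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()  # applies sigmoid internally
# pass the raw logits straight from the model; no F.sigmoid beforehand
loss = criterion(outputs.view(-1), labels.view(-1).float())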

Looks good! What does the loss do?
Also, your dice loss might be a good idea; I didn’t mean to criticize it!
You should, however, remove one of the sigmoid calls. :wink:
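
If you keep the dice loss, a version that applies the sigmoid exactly once inside forward could look like this. A sketch only: the notebook's dice_coeff isn't shown here, so a common smoothed soft-dice formulation stands in for it:

import torch
import torch.nn.functional as F
from torch.nn.modules.loss import _Loss

class DiceLoss(_Loss):
    def forward(self, input, target):
        # input: raw logits from the model; sigmoid is applied here, once
        probs = F.sigmoid(input).view(-1)
        target = target.view(-1).float()
        smooth = 1.0  # smoothing term to avoid division by zero on empty masks
        intersection = (probs * target).sum()
        dice = (2. * intersection + smooth) / (probs.sum() + target.sum() + smooth)
        return 1 - dice

With the sigmoid inside the loss, the explicit F.sigmoid call in the training loop has to be dropped, so the logits are squashed exactly once.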

@ptrblck I just trained the model again and, as you can see, the loss is still not decreasing. BCELoss is the binary cross-entropy loss. I have also given you edit access.