I am attaching the URL of the Jupyter notebook. I can’t figure out why my loss is not decreasing. I have been stuck on this for 3 days, so any help is very much appreciated. The command to download the data is in the notebook.

for epoch in range(15):
    for i, data in enumerate(trainloader):
        inputs, labels = data
        inputs = inputs.cuda()
        labels = labels.cuda()
        # forward + backward + optimize
        # zero the gradient buffers of all parameters
        optimizer.zero_grad()
        # forward pass
        outputs = model_pytorch(inputs)
        # calculate the loss
        loss = loss_function(outputs, labels)
        # backpropagation
        loss.backward()
        # update the parameters using the computed gradients
        optimizer.step()
        if (i + 1) % 5 == 0:
            print('[%d, %5d] loss: %.4f' % (epoch, i + 1, loss.item()))

You are using probs for both probs_flat and labels_flat, so your loss function compares the predictions with themselves and will therefore return a zero loss.
However, you are also calling F.sigmoid() twice on outputs: first in the training loop and then again in your loss definition.
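A minimal sketch of both issues, using a hypothetical dice loss as a stand-in for the one in the notebook (the actual definition isn't shown here, so the names are illustrative):

```python
import torch

def dice_loss(probs, targets, eps=1e-7):
    # hypothetical dice loss similar to the one under discussion
    probs_flat = probs.view(-1)
    targets_flat = targets.view(-1)
    intersection = (probs_flat * targets_flat).sum()
    dice = (2.0 * intersection + eps) / (probs_flat.sum() + targets_flat.sum() + eps)
    return 1.0 - dice

# Issue 1: passing the predictions as both arguments makes the loss
# independent of the labels; for a binary mask it is exactly zero.
mask = (torch.rand(4, 1, 8, 8) > 0.5).float()
print(dice_loss(mask, mask))  # tensor(0.) regardless of the model

# Issue 2: applying sigmoid twice squashes the values into (0.5, 0.731),
# so the "probabilities" can never get close to 0 or 1.
logits = torch.randn(4, 1, 8, 8) * 5
once = torch.sigmoid(logits)   # spans nearly the full (0, 1) range
twice = torch.sigmoid(once)    # squeezed into (0.5, 0.731)
print(twice.min().item(), twice.max().item())
```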

Looks good! What does the loss do?
Also, your dice loss might be a good idea. I didn’t mean to criticize it!
You should, however, remove one of the sigmoid calls.

@ptrblck I just trained the model again and, as you can see, the loss is not decreasing. BCELoss is the binary cross-entropy loss. I have also given you edit access.
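For reference, a small sketch of the same single-sigmoid point with BCELoss: it expects probabilities, so the sigmoid must be applied exactly once before it, whereas nn.BCEWithLogitsLoss takes raw logits and folds the sigmoid in, which is also numerically more stable (tensor shapes here are illustrative):

```python
import torch
import torch.nn as nn

logits = torch.randn(2, 1, 16, 16)
targets = torch.randint(0, 2, (2, 1, 16, 16)).float()

# BCELoss expects probabilities, so apply sigmoid exactly once beforehand
bce = nn.BCELoss()(torch.sigmoid(logits), targets)

# BCEWithLogitsLoss takes raw logits and applies the sigmoid internally
bce_logits = nn.BCEWithLogitsLoss()(logits, targets)

# both compute the same quantity, up to floating-point error
print(bce.item(), bce_logits.item())
```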