I am working on a medical image segmentation task with a highly imbalanced dataset. In my loss function (in the BCE part) I want to penalize False Negatives in order to get a higher Recall.
avg_meter = defaultdict(float)
pbar = tqdm(enumerate(self.train_loader), total=len(self.train_loader), desc="Epoch {}".format(epoch), ncols=0)
self.attUnet.train(True)
meter = {}
acc_detection = 0.
recall_detection = 0.
precision_detection = 0.
DC_detection = 0.
length = 0
avg_loss = 0
for i, data in pbar:
    images = data[0]
    GT = data[1]
    # Variable is a no-op since PyTorch 0.4; .cuda() alone is enough
    images, GT = images.cuda(), GT.cuda()
    SR = self.attUnet(images)
    SR_probs = torch.nn.functional.sigmoid(SR)
    length += images.size(0)
    loss = bce_dice_loss(SR_probs, GT)
    avg_loss += loss.item()
    self.optimizer.zero_grad()
    loss.backward()
    self.optimizer.step()
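The loop calls a `bce_dice_loss` helper that is not shown in the post. As a point of reference only, a common formulation of such a combined loss (an assumption on my part, not necessarily the one used here) takes sigmoid probabilities in [0, 1] and sums BCE with a soft Dice term:

```python
import torch

def bce_dice_loss(probs, target, smooth=1.0):
    # Hedged sketch of a typical BCE + Dice combination; the post's own
    # bce_dice_loss is not shown, so this is an assumed implementation.
    bce = torch.nn.functional.binary_cross_entropy(probs, target)
    probs_flat = probs.view(-1)
    target_flat = target.view(-1)
    intersection = (probs_flat * target_flat).sum()
    # Soft Dice coefficient; smooth avoids division by zero on empty masks
    dice = (2.0 * intersection + smooth) / (probs_flat.sum() + target_flat.sum() + smooth)
    return bce + (1.0 - dice)
```

Note that this form expects probabilities, which is why the loop applies a sigmoid before calling it.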
Does this mean I have to get rid of this line, to remove the sigmoid:
SR_probs = torch.nn.functional.sigmoid(SR)
And also, by setting pos_weight > 1, is this the right implementation:
pos_weight = torch.FloatTensor([1, 1.2])
criterion = torch.nn.BCEWithLogitsLoss(pos_weight)
loss = criterion(GT, SR)
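For comparison, here is a minimal self-contained sketch of how `BCEWithLogitsLoss` is typically called: it takes raw logits (no sigmoid) as the first argument and the target second, and `pos_weight` is a keyword argument with one entry per channel, not a pair of class weights. Shapes below are hypothetical (batch of 2, one-channel 4x4 masks), not taken from the post:

```python
import torch

# Hedged sketch, assuming a single-channel binary segmentation output.
SR = torch.randn(2, 1, 4, 4)                    # raw logits from the network
GT = torch.randint(0, 2, (2, 1, 4, 4)).float()  # binary ground-truth mask

pos_weight = torch.tensor([1.2])                # > 1 weighs positives more heavily
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)

loss = criterion(SR, GT)                        # input (logits) first, target second
```

Passing `pos_weight` positionally, as in `BCEWithLogitsLoss(pos_weight)`, would bind it to the `weight` parameter instead, which has a different meaning (per-element rescaling rather than positive-class weighting).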