I’m trying to train a segmentation model with an LBCNet encoder and a U-Net decoder. My dataset contains biomedical images of skin lesions and their corresponding masks, but whenever I run training, the loss for each batch is a very large negative value, and I can’t figure out why.
10 epochs, 260 total steps per epoch
Epoch [1/10], Step [100/260], Loss: -53.379780
Epoch [1/10], Step [200/260], Loss: -49.816761
Epoch [2/10], Step [100/260], Loss: -23.087646
Epoch [2/10], Step [200/260], Loss: -48.471092
Epoch [3/10], Step [100/260], Loss: -52.749981
Epoch [3/10], Step [200/260], Loss: -29.466999
Epoch [4/10], Step [100/260], Loss: -57.213978
Epoch [4/10], Step [200/260], Loss: -20.504835
Epoch [5/10], Step [100/260], Loss: -80.539688
Epoch [5/10], Step [200/260], Loss: -70.907478
Epoch [6/10], Step [100/260], Loss: -62.066265
Epoch [6/10], Step [200/260], Loss: -51.600151
Epoch [7/10], Step [100/260], Loss: -24.601454
Epoch [7/10], Step [200/260], Loss: -72.403343
Epoch [8/10], Step [100/260], Loss: -45.917526
Epoch [8/10], Step [200/260], Loss: -62.943592
Epoch [9/10], Step [100/260], Loss: -36.994308
Can you please format your code properly and post it again?
Hi Omran!

Responding solely to the title of your post:

BCELoss requires both its input and its target to be probabilities, that is, numbers between zero and one. (The input must be in (0, 1), exclusive, otherwise you can get -inf, while the target can be in [0, 1], inclusive, that is, it can be equal to 0 or 1.) If you go outside of these ranges, you can get negative values.
(In general, you will prefer BCEWithLogitsLoss over BCELoss.)
Good luck.
K. Frank
Thank you for your response.

Thanks KFrank. I’ll work on checking that my probabilities are in range before feeding them to the BCE loss.
Hello guys, I have fixed the negative-loss issue by normalizing my data. Now I’m facing another issue: the reported training accuracy is far above 100%, so I think there is something wrong with my accuracy code. Can you please look at my code and let me know what is going on? Thank you.
for epoch in range(num_epochs):
    running_loss = 0
    total_train = 0
    correct_train = 0
    loss_values = []
    running_loss = 0.0
    for i, data in enumerate(train_loader, 0):
        # get the inputs
        t_image, mask = gen.__getitem__(0)
        t_image, mask = torch.Tensor(t_image), torch.Tensor(mask)
        t_image = t_image.view([t_image.shape[0], 1, t_image.shape[1], t_image.shape[2]])
        mask = mask.view([mask.shape[0], 1, mask.shape[1], mask.shape[2]])
        t_image, mask = Variable(t_image.float()), Variable(mask.float())

        optimizer.zero_grad()
        output = model(t_image)            # forward
        outputs = torch.sigmoid(output)
        loss = criterion(outputs, mask)
        loss.backward()                    # back propagation
        optimizer.step()                   # update parameters
        running_loss += loss.item()

        mask = torch.tensor(mask, dtype=torch.long, device=device)
        running_loss =+ loss.item() * t_image.size(0)

        # accuracy
        _, predicted = torch.max(outputs.data, 1)
        total_train += mask.nelement()     # mask.size(0)
        correct_train += predicted.eq(mask.data).sum().item()
        train_accuracy = 100 * correct_train / total_train
        # avg_accuracy += train_accuracy

        print("Epoch {}/{}, Train Loss: {:.3f}, Train Accuracy: {:.3f}".format(
            epoch + 1, num_epochs, loss.item(), train_accuracy))
Epoch 1/1, Train Loss: 0.718, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.893, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.633, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.376, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.275, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.201, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.172, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.170, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.144, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.165, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.154, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.131, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.152, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.217, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.178, Train Accuracy: 606.793
Epoch 1/1, Train Loss: 0.174, Train Accuracy: 606.793
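A likely culprit (my reading of the code above, not a confirmed diagnosis): with a single-channel output, torch.max(outputs, 1) always returns index 0, and predicted then has shape [B, H, W] while mask has shape [B, 1, H, W], so eq broadcasts and correct_train can exceed total_train. A minimal sketch with made-up toy shapes, thresholding the sigmoid output instead so the shapes match exactly:

```python
import torch

# Toy stand-ins for a batch of sigmoid outputs and binary masks,
# both shaped [B, 1, H, W] as in the training loop above.
outputs = torch.tensor([[[[0.9, 0.2], [0.6, 0.4]]]])
mask = torch.tensor([[[[1.0, 0.0], [0.0, 1.0]]]])

# Threshold at 0.5 instead of torch.max over a size-1 channel dim;
# predicted and mask now have identical shapes, so no broadcasting,
# and accuracy is bounded by 100%.
predicted = (outputs > 0.5).float()
correct = predicted.eq(mask).sum().item()
total = mask.numel()
accuracy = 100.0 * correct / total
```

For this toy batch, two of four pixels agree, so accuracy comes out to 50.0. Accumulating correct and total this way across batches keeps the running accuracy in [0, 100].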