BCEWithLogitsLoss training prediction

Hello

I am using BCEWithLogitsLoss to train a UNet model. The input images are in RGB format and the target images are binary (values 0 and 1).

When training the model I looked at the shapes of the inputs in the batch, and they are:

torch.Size([8, 3, 250, 250]) - input RGB image
torch.Size([8, 1, 250, 250]) - target binary image

During training the predicted output has a shape of:
torch.Size([8, 1, 250, 250])

pred = model(images)
print(pred.shape)
loss = torch.nn.BCEWithLogitsLoss()(pred, true_masks)

I just wondered, is it normal for the predicted output to include the batch dimension, being 8 in this case?

Thank you

Yes, the model is expected to return an output for each sample. Since you passed 8 samples as a batch to the model, 8 outputs are also expected in the output batch.
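As a minimal sketch (using a single conv layer as a stand-in for your UNet, with random tensors matching your shapes):

import torch

# stand-in for a segmentation model: 3 input channels -> 1 output channel
model = torch.nn.Conv2d(in_channels=3, out_channels=1, kernel_size=3, padding=1)

images = torch.randn(8, 3, 250, 250)  # batch of 8 RGB images
pred = model(images)
print(pred.shape)  # torch.Size([8, 1, 250, 250]) - one output per sample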

Thank you

Within the batch for-loop, will each function line be called 8 times behind the scenes? For example:

loss.backward()
optimizer.step()

If we look at the output of pred = model(images), will this show the logit values rather than the sigmoid outputs?

Thank you

No, since PyTorch layers accept batched inputs and will launch a single kernel for each operation.
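To illustrate (a minimal sketch; the same applies to loss.backward() and optimizer.step(), which are each called once per batch, not once per sample):

import torch

conv = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
images = torch.randn(8, 3, 250, 250)

# one call processes the whole batch; no Python-level loop over the 8 samples
batched = conv(images)

# a per-sample loop gives the same values, just computed far less efficiently
looped = torch.stack([conv(img.unsqueeze(0)).squeeze(0) for img in images])
print(torch.allclose(batched, looped, atol=1e-6))  # True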

It depends on your model and whether it's returning raw logits from e.g. a linear layer or if a torch.sigmoid is applied on the output before returning it.
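A quick way to tell the two apart is the value range (a minimal sketch with random numbers standing in for model outputs):

import torch

logits = torch.randn(8, 1, 250, 250) * 5  # raw logits: unbounded, can be negative
print(logits.min(), logits.max())

probs = torch.sigmoid(logits)  # sigmoid maps logits to (0, 1)
print(probs.min(), probs.max())  # always inside (0, 1)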

Sorry, I meant during training, if we have:

pred = model(images)
loss = torch.nn.BCEWithLogitsLoss()(pred, true_masks)

would the output pred contain the logit values, i.e. a much wider range of numbers than 0 to 1? I just noticed the output during training has negative values, so I am just trying to understand what it represents. I have checked the true_masks and they are binary with a max of 1.
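To check my understanding, I would expect something like this to hold (a minimal sketch with random tensors in place of my data):

import torch

pred = torch.randn(8, 1, 250, 250)  # raw logits, can be negative
true_masks = torch.randint(0, 2, (8, 1, 250, 250)).float()

loss_logits = torch.nn.BCEWithLogitsLoss()(pred, true_masks)
loss_probs = torch.nn.BCELoss()(torch.sigmoid(pred), true_masks)
print(torch.allclose(loss_logits, loss_probs, atol=1e-6))  # True - sigmoid is applied internally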

Thanks