Model inference when training with BCEWithLogitsLoss

Hi, when training with BCEWithLogitsLoss, we know that it internally applies the sigmoid for us, so we don’t need another sigmoid in our model.

But for inference, do we need to add a sigmoid as the last layer of our model, to map the logits into (0, 1)?

The sigmoid is applied together with the loss for numerical stability, so at inference you need to apply it manually (e.g. via `torch.sigmoid`) if you want probabilities. If you only need hard predictions at a 0.5 threshold, you can equivalently compare the raw logits against 0.
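A minimal sketch of this workflow, assuming a simple linear model as a stand-in for your network:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for your model: outputs raw logits, no sigmoid.
model = nn.Linear(4, 1)

# Training: BCEWithLogitsLoss applies the sigmoid internally,
# so the model's forward pass must return raw logits.
criterion = nn.BCEWithLogitsLoss()
x = torch.randn(8, 4)
targets = torch.randint(0, 2, (8, 1)).float()
loss = criterion(model(x), targets)

# Inference: apply the sigmoid manually to get probabilities in (0, 1).
model.eval()
with torch.no_grad():
    logits = model(x)
    probs = torch.sigmoid(logits)
    preds = (probs > 0.5).long()  # same result as (logits > 0).long()
```

Note that `preds` can be computed directly from the logits, since sigmoid is monotonic and `sigmoid(0) = 0.5`.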