RuntimeError: binary_cross_entropy and BCELoss are unsafe to autocast

I am using autocast with my model and running into the following error:

RuntimeError: torch.nn.functional.binary_cross_entropy and torch.nn.BCELoss are unsafe to autocast.
Many models use a sigmoid layer right before the binary cross entropy layer.
In this case, combine the two layers using torch.nn.functional.binary_cross_entropy_with_logits
or torch.nn.BCEWithLogitsLoss.  binary_cross_entropy_with_logits and BCEWithLogits are
safe to autocast.
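
For reference, the fix the error suggests is to drop the explicit sigmoid and pass the raw logits to the combined loss (a minimal sketch; logits and labels are placeholder names):

# unsafe under autocast: separate sigmoid + binary_cross_entropy
probs = torch.sigmoid(logits)
loss = torch.nn.functional.binary_cross_entropy(probs, labels)

# safe under autocast: the sigmoid is fused into the loss
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)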

I am trying to compare two quite similar models. The first one uses BCEWithLogitsLoss, so no issues there. The second one, however, calculates the probability of the positive example with softmax instead: the output of the last layer is passed through two dense layers, their outputs are stacked, and softmax is applied over the stacked dimension. The pseudo code looks like this:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.pos_prob = nn.Linear(100, 1)
        self.neg_prob = nn.Linear(100, 1)
        self.softm = nn.Softmax(dim=0)
        self.loss_fn = nn.BCELoss()

    def forward(self, input, labels):
        probs_pos = self.pos_prob(input)
        probs_neg = self.neg_prob(input)

        # stack the two logits and take the softmax probability
        # of the positive class
        probs = self.softm(torch.stack((probs_pos, probs_neg)))[0, :, :].squeeze()
        loss = self.loss_fn(probs, labels).view(-1, 1)
        return loss

Is there any way to still use AMP?
Thanks in advance

Note that nn.BCELoss usually expects sigmoid outputs rather than softmax ones, since it can be used for binary as well as multi-label classification.
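Since your model only produces two logits, the softmax probability of the positive class is mathematically the same as sigmoid(probs_pos - probs_neg), so one option (a sketch based on the code above, not a tested drop-in) would be to feed that difference to the autocast-safe loss:

# softmax([pos, neg])[0] == sigmoid(pos - neg), so the positive
# probability can be expressed through a single logit
logits = (probs_pos - probs_neg).squeeze()
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)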
In any case, you could disable autocast for this operation using the context manager.
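A minimal sketch inside forward (assuming a CUDA run; the .float() casts are there because tensors produced under autocast may be float16):

# run the loss in float32 while the rest of the model
# stays under autocast
with torch.cuda.amp.autocast(enabled=False):
    loss = self.loss_fn(probs.float(), labels.float()).view(-1, 1)

On recent PyTorch versions, torch.amp.autocast("cuda", enabled=False) is the equivalent newer spelling.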