Computational stability of -torch.log(torch.exp(a)+torch.exp(b))

Suppose a is a tensor of shape (1, 1, 256, 256, 256). Using
-torch.log(torch.exp(-|a|) + torch.exp(-|1-a|)) as the loss seems to run into numerical stability issues such as NaN values. Is there another implementation that avoids this?
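
A minimal, made-up repro of the behavior I'm seeing (the tensor values here are just illustrative; the problem shows up once the exponent magnitudes get large):

```python
import torch

a = torch.tensor([0.5, 200.0, -200.0], requires_grad=True)

# Both exp(-|a|) and exp(-|1 - a|) underflow to 0 for the large-magnitude
# entries, so the log becomes log(0) = -inf and the loss blows up
loss = -torch.log(torch.exp(-a.abs()) + torch.exp(-(1 - a).abs()))
print(loss)  # the large-magnitude entries evaluate to inf

loss.sum().backward()
print(a.grad)  # and their gradients come out as nan
```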

Would this native function be applicable to your use case, and would it offer better numerical stability?
https://pytorch.org/docs/stable/generated/torch.logaddexp.html
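
For reference, something along these lines should be a drop-in replacement for the expression in your post (untested sketch, reusing the same illustrative values):

```python
import torch

a = torch.tensor([0.5, 200.0, -200.0], requires_grad=True)

# torch.logaddexp computes log(exp(x) + exp(y)) by factoring out the
# larger exponent, so the intermediate exponentials never underflow to 0
loss = -torch.logaddexp(-a.abs(), -(1 - a).abs())
print(loss)  # finite everywhere, e.g. ~198.7 instead of inf for a = 200

loss.sum().backward()
print(a.grad)  # gradients stay finite as well
```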

Thanks, let me try this one.