Why is there no log operator in the implementation of torch.nn.NLLLoss?

I have read the post. My understanding is that the numerical instability comes from the implementation of softmax, and there are two alternatives to solve the problem; log-sum-exp is the one PyTorch uses. We could also keep log and softmax separate, but implement softmax in a more stable way by shifting the exponent (subtracting the maximum logit).
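To make the comparison concrete, here is a small sketch (my own illustration, not PyTorch's actual implementation) of the three variants I have in mind: the naive formula, softmax stabilized by shifting the exponent followed by a separate log, and the fused log-sum-exp form:

```python
import torch

def naive_log_softmax(x):
    # exp(x) overflows for large logits, producing inf/nan
    return torch.log(torch.exp(x) / torch.exp(x).sum(dim=-1, keepdim=True))

def shifted_softmax_then_log(x):
    # stabilize softmax by subtracting the max logit before exponentiating,
    # then apply log separately afterwards
    z = x - x.max(dim=-1, keepdim=True).values
    return torch.log(torch.exp(z) / torch.exp(z).sum(dim=-1, keepdim=True))

def fused_log_softmax(x):
    # log-sum-exp trick: log_softmax(x) = x - logsumexp(x)
    return x - torch.logsumexp(x, dim=-1, keepdim=True)

logits = torch.tensor([[1000.0, 0.0, -1000.0]])
print(naive_log_softmax(logits))         # nan, exp overflows
print(shifted_softmax_then_log(logits))  # finite, but small probs underflow to log(0) = -inf
print(fused_log_softmax(logits))         # finite log-probabilities for every class
```

As the example suggests, even the shifted softmax can lose information once the log is applied afterwards, which (as far as I understand) is another reason to fuse the two operations.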
But considering that we need the logarithm of the softmax output in NLLLoss anyway, fusing the two with the log-sum-exp trick is faster. So we simply compute the logarithm of the softmax output early, in log_softmax, instead of leaving that log to NLLLoss, which is why NLLLoss itself contains no log operator.
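In other words, as I understand it, NLLLoss expects log-probabilities as its input, and composing log_softmax with nll_loss reproduces cross_entropy; a quick sketch to check this:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)            # batch of 4 samples, 10 classes
targets = torch.randint(0, 10, (4,))   # ground-truth class indices

# The log is computed "early" inside log_softmax (via log-sum-exp);
# nll_loss then only gathers and averages the negative log-probabilities.
loss_a = F.nll_loss(F.log_softmax(logits, dim=1), targets)

# cross_entropy fuses both steps and should give the same value.
loss_b = F.cross_entropy(logits, targets)

print(torch.allclose(loss_a, loss_b))  # True
```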
Please let me know if I am wrong.