Suggested loss function for One-Hot Encoding Target

I realize this has been discussed at nauseam, but I just wanted to verify that if the output of my network is a vector [0.1, 0.5, 0.3, 0.1] and my target is a one-hot encoded vector [0, 0, 1, 0] I would use the nn.NLLLoss() function correct? Oh and for clarification, the last layer of my network is a LogSoftMax layer.

Yes, although you may want to consider returning the raw logits and combining the two with CrossEntropyLoss: CrossEntropyLoss — PyTorch 1.11.0 documentation
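One detail worth noting: nn.NLLLoss (and nn.CrossEntropyLoss) expect the target as a class *index*, not a one-hot vector, so the one-hot target would be converted with argmax first. A minimal sketch of both options, with made-up logits for illustration:

```python
import torch
import torch.nn as nn

# Toy example: raw logits from the network (batch of 1, 4 classes)
logits = torch.tensor([[0.2, 1.5, 0.9, 0.2]])

# One-hot target [0, 0, 1, 0] -> NLLLoss wants the class index, not the vector
one_hot = torch.tensor([[0, 0, 1, 0]])
target = one_hot.argmax(dim=1)  # tensor([2])

# Option 1: LogSoftmax layer in the model + NLLLoss
log_probs = nn.LogSoftmax(dim=1)(logits)
loss_nll = nn.NLLLoss()(log_probs, target)

# Option 2: raw logits + CrossEntropyLoss (applies log-softmax internally)
loss_ce = nn.CrossEntropyLoss()(logits, target)

# The two losses are numerically identical
print(torch.isclose(loss_nll, loss_ce))  # tensor(True)
```

Even with the CrossEntropyLoss route you can still inspect the probabilities for debugging via `logits.softmax(dim=1)`; the softmax just isn't part of the model itself.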


Thanks! I was avoiding that so that I could see the output of the softmax (petty, I know, haha).