About nn.CrossEntropyLoss() output

I use nn.CrossEntropyLoss() to train my classification model, but when I test the model, some of the outputs are negative. Is that normal? I want a probability distribution over the classes — what should I do?

Yes, that's normal. nn.CrossEntropyLoss() expects raw, unnormalized logits and applies LogSoftmax internally (it combines nn.LogSoftmax and nn.NLLLoss), so your model's outputs are logits and can be negative. At prediction time, apply a softmax to the output to turn the logits into a probability distribution over the classes.
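A minimal sketch of what that looks like at inference time (the logits tensor here is a made-up example standing in for your model's output):

```python
import torch
import torch.nn.functional as F

# Raw model output (logits) for one sample over 3 classes.
# Negative values are perfectly normal here.
logits = torch.tensor([[2.0, -1.0, 0.5]])

# Softmax over the class dimension turns logits into probabilities.
probs = F.softmax(logits, dim=1)

print(probs)            # all entries in (0, 1)
print(probs.sum(dim=1)) # sums to 1 per sample
```

Note that you only do this when predicting — during training you keep passing the raw logits to nn.CrossEntropyLoss(), since it applies the (log-)softmax itself, and applying softmax twice would hurt training.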