Regarding one hot encodings in Pytorch

I recently started with Pytorch. I was trying to build a model for multi-class image classification using pre-trained models from torchvision. My models seem to work fine. I used cross-entropy loss / NLLLoss.
I was wondering if internally in the Pytorch framework, the labels are being converted to one-hot encodings?

Not in the case of nn.CrossEntropyLoss or nn.NLLLoss — both expect class indices as targets rather than one-hot vectors (nn.CrossEntropyLoss is simply nn.LogSoftmax followed by nn.NLLLoss).
If you have a look at the formula for the loss, you’ll see that only the target class index is needed to pick out the corresponding log probability. I guess other frameworks convert the one-hot encoded vector to an index representation for this reason (or just multiply the log probabilities with the one-hot vector, which selects the same entry).
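
To make this concrete, here is a small sketch (the logits and targets are made up for illustration) showing that nn.CrossEntropyLoss takes plain class indices, matches nn.NLLLoss applied to log-softmax outputs, and equals both direct indexing into the log probabilities and multiplication with a one-hot encoding:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy logits for a batch of 2 samples over 4 classes (illustrative values)
logits = torch.tensor([[2.0, 0.5, -1.0, 0.1],
                       [0.3, 1.7, 0.2, -0.5]])
targets = torch.tensor([0, 1])  # class indices, NOT one-hot vectors

# nn.CrossEntropyLoss applies log-softmax internally, then NLLLoss
ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)

# The loss only needs the log probability at each target index
log_probs = F.log_softmax(logits, dim=1)
by_index = -log_probs[torch.arange(len(targets)), targets].mean()

# Multiplying with a one-hot vector selects exactly the same entries
one_hot = F.one_hot(targets, num_classes=4).float()
by_one_hot = -(log_probs * one_hot).sum(dim=1).mean()

print(torch.allclose(ce, nll))        # True
print(torch.allclose(ce, by_index))   # True
print(torch.allclose(ce, by_one_hot)) # True
```

So PyTorch never materializes a one-hot tensor for these losses; the index form is all it needs.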