How is cross entropy loss calculated?

output with 4 classes 0, 1, 2, 3 ->>> tensor([[-0.0820, 0.0076, -0.0952, 0.0285]])
real output ->>> tensor([1])
Loss ->>> tensor(1.3449, dtype=torch.float64, grad_fn=)

Is it 1 * log(0.0076) ???

No, it is -log(softmax(.)) evaluated at the target class.
What you have are logits (unnormalized log probabilities); the softmax transforms them into probabilities.

import torch
from torch import tensor
import torch.nn.functional as F

logits = tensor([[-0.0820, 0.0076, -0.0952, 0.0285]])  # raw model output (logits) for 4 classes
y = tensor([1])                                        # target class index

# F.cross_entropy(logits, y)  # -> tensor(1.3449)
-torch.log(F.softmax(logits, dim=1))
"""
tensor([[1.4345, 1.3449, 1.4477, 1.3240]])  ==> 1.3449 for y=1
"""

The advantage of PyTorch's cross_entropy loss function is that it effectively combines the log and the softmax into a single log_softmax step, which is more numerically stable.

-F.log_softmax(logits, dim=1)  # same values as -torch.log(F.softmax(...)), computed more stably
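
For completeness, a small sketch of the equivalence under the hood (same tensors as above; torch.allclose is used only to compare the two results):

# F.cross_entropy is log_softmax followed by the negative log-likelihood loss
loss_ce = F.cross_entropy(logits, y)
loss_manual = F.nll_loss(F.log_softmax(logits, dim=1), y)
print(torch.allclose(loss_ce, loss_manual))  # True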

Thank you so much, pascal