Why does CrossEntropyLoss only calculate the loss for the node of the label's class and not the others?

I know that the CrossEntropyLoss requires logits.

My problem is that, from what I learned in textbooks and courses, for each data sample the cross entropy should be calculated for every output node and then summed up.

The following assumes that we have one data sample passed through the network, giving the output logit vector z. There are three classes, and the label for this data sample is 0.

Why does it only calculate the loss for the node of the label's class, instead of summing up the losses of all the output nodes?
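To make the setup concrete, here is a small sketch of the observation (the logit values are made up for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One data sample, three classes; the label is class 0.
z = torch.tensor([[2.0, 1.0, 0.5]])  # made-up logits
y = torch.tensor([0])                # label

# PyTorch's multi-class loss:
loss = nn.CrossEntropyLoss()(z, y)

# Manually: only the log-probability of the label's node appears.
log_p = F.log_softmax(z, dim=1)
manual = -log_p[0, 0]

print(loss.item(), manual.item())  # the two values match
```

Both values are identical, which is exactly the behavior the question is about: only node 0 (the label's class) contributes.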

It seems you are trying to apply the binary cross-entropy formula to the multi-class nn.CrossEntropyLoss, which won't work.
Instead of the -[y * (...) + (1 - y) * (...)] approach (which would be correct for the binary use case), you would have to multiply in the targets directly, as -[y[0] * (...) + y[1] * (...) + y[2] * (...)], which then reduces to the y[0] * (...) term only, since only a single class is active.
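You can verify this reduction numerically: with a one-hot target, the full sum over all three output nodes equals the single term of the active class (the logit values below are made up for illustration):

```python
import torch
import torch.nn.functional as F

z = torch.tensor([[2.0, 1.0, 0.5]])         # made-up logits
y_onehot = torch.tensor([[1.0, 0.0, 0.0]])  # class 0 is active

log_p = F.log_softmax(z, dim=1)

# Full multi-class sum: -(y[0]*log_p[0] + y[1]*log_p[1] + y[2]*log_p[2])
full_sum = -(y_onehot * log_p).sum()

# Since y[1] == y[2] == 0, only the active class term survives.
active_only = -log_p[0, 0]

print(full_sum.item(), active_only.item())  # equal
```

So nothing is being skipped: all nodes are summed, but the zero-valued targets wipe out every term except the one for the label's class.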