nn.CrossEntropyLoss for conditional GAN is enormous

I’ve tried the default reduction (elementwise_mean) and the loss is still >400, unfortunately.
The conditional GAN setup is a bit different, since the generator is explicitly given the class labels, but I could look at it again.
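
For reference, here's roughly the sanity check I ran on the reduction modes (the shapes are made up, and I'm using `'mean'`, which I believe is just the newer name for `elementwise_mean`):

```python
import torch
import torch.nn as nn

# Made-up shapes just for a sanity check: batch of 64, 10 classes.
logits = torch.randn(64, 10)           # raw scores straight from the model
targets = torch.randint(0, 10, (64,))  # class indices, not one-hot

# 'mean' (previously called 'elementwise_mean') averages over the batch,
# so with random logits the loss should sit near log(10) ≈ 2.3.
print(nn.CrossEntropyLoss(reduction='mean')(logits, targets))

# 'sum' scales with batch size, which is one way the loss can reach the
# hundreds without the model itself being wrong.
print(nn.CrossEntropyLoss(reduction='sum')(logits, targets))
```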

Is there perhaps another reason why the loss would be so high?
Also, a PyTorch forum post said that nn.CrossEntropyLoss is now the PyTorch equivalent of TensorFlow's sigmoid_cross_entropy_with_logits. Is that true? And is this loss appropriate for a multiclass problem (i.e. >2 classes, but still only one class per image)?
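
For what it's worth, here's a minimal sketch of how I currently understand the two losses; the shapes and variable names are just illustrative, so please correct me if this is off:

```python
import torch
import torch.nn as nn

batch, num_classes = 64, 10  # hypothetical sizes

# Multiclass, one label per image: CrossEntropyLoss applies log-softmax
# internally, so it wants raw logits plus integer class indices.
ce = nn.CrossEntropyLoss()
class_logits = torch.randn(batch, num_classes)
class_labels = torch.randint(0, num_classes, (batch,))
multiclass_loss = ce(class_logits, class_labels)

# Per-logit sigmoid (what I'd expect to match TF's
# sigmoid_cross_entropy_with_logits): BCEWithLogitsLoss takes float targets
# of the same shape as the logits, e.g. a discriminator's real/fake output.
bce = nn.BCEWithLogitsLoss()
disc_logits = torch.randn(batch, 1)
real_targets = torch.ones(batch, 1)  # "real" labels for the discriminator
binary_loss = bce(disc_logits, real_targets)

print(multiclass_loss.item(), binary_loss.item())
```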