Is my code for getting the predicted class correct? I don't understand how softmax is used inside cross entropy here to get the class with the highest probability.

for i, (x, y) in enumerate(zip(feature_trainloader, label_trainloader), 0):
    optimizer.zero_grad()
    output = self.forward(x)
    target = y
    loss = self.CrossEntropyLoss(output, target)
    epoch_loss_train.append(loss.item())
    y_true_train = torch.cat((y_true_train, target))
    _, pred_class = torch.max(output, dim=1)
    y_pred_train = torch.cat((y_pred_train, pred_class))
    loss.backward()
    optimizer.step()

nn.CrossEntropyLoss applies F.log_softmax and nn.NLLLoss internally to compute the loss, which is why you shouldn't apply a softmax activation to the model outputs yourself.
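You can verify this equivalence yourself with a small sketch (the logits and targets below are made-up values, not from your model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 3)           # raw model outputs, no softmax applied
target = torch.tensor([0, 2, 1, 2])  # class indices for the 4 samples

# CrossEntropyLoss on raw logits...
ce = nn.CrossEntropyLoss()(logits, target)
# ...equals NLLLoss on log-softmaxed logits.
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)

assert torch.allclose(ce, nll)
```

If you applied softmax before nn.CrossEntropyLoss, the loss would effectively take log_softmax of probabilities, which squashes the gradients and usually hurts training.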
To get the predictions you can apply torch.argmax directly to the logits, since softmax is monotonic and preserves the ordering: the highest/lowest logit is also the highest/lowest probability.
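Here is a quick demonstration with a toy batch of logits (hypothetical values) showing that the argmax is the same before and after softmax:

```python
import torch
import torch.nn.functional as F

# Toy batch: 4 samples, 3 classes (made-up logits).
logits = torch.tensor([[ 2.0, -1.0, 0.5],
                       [ 0.1,  0.2, -3.0],
                       [-1.5,  4.0, 1.0],
                       [ 0.0,  0.0, 5.0]])

probs = F.softmax(logits, dim=1)                 # row-wise probabilities
pred_from_logits = torch.argmax(logits, dim=1)
pred_from_probs = torch.argmax(probs, dim=1)

# Softmax preserves ordering, so both give identical predictions.
assert torch.equal(pred_from_logits, pred_from_probs)
print(pred_from_logits)  # tensor([0, 1, 1, 2])
```

So `_, pred_class = torch.max(output, dim=1)` in your loop is correct; it is equivalent to `torch.argmax(output, dim=1)` on the raw logits.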