I have trained a model on a dataset with 5 different classes. The model produces an output of shape `[Batch_Size, 400]`, and I trained it using Cross Entropy Loss and the Adam optimizer, without a Softmax function.

After the forward pass I get the predicted outputs `y_predict = model(images.to(device))`, which have shape `[Batch_Size, 400]`. I used `_, pred_labels = torch.max(y_predict, 1)` to get the predicted labels, which have shape `[Batch_Size]`. Then I compared them with the actual labels of that batch, `true_labels`, using `running_corrects += torch.sum(pred_labels == true_labels.data.to(device))` to count the number of correct predictions.
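The steps above can be sketched as follows (a minimal, self-contained example with made-up shapes and random data standing in for the real model and batch; only columns 0–4 are non-zero, matching the situation described below):

```python
import torch

# Hypothetical stand-ins for the real model output and labels:
# 5 classes, but the model emits 400 columns per row.
batch_size, num_outputs, num_classes = 8, 400, 5
y_predict = torch.zeros(batch_size, num_outputs)
y_predict[:, :num_classes] = torch.rand(batch_size, num_classes) + 0.1  # columns 0-4 non-zero
true_labels = torch.randint(0, num_classes, (batch_size,))

# Argmax over dim 1 gives one predicted label per row, shape [batch_size]
_, pred_labels = torch.max(y_predict, 1)

# Count correct predictions and derive accuracy
running_corrects = torch.sum(pred_labels == true_labels)
accuracy = running_corrects.item() / batch_size
```

Because the positive values all sit in columns 0–4 here, the argmax always lands in those indices, as in the question.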

In the model output of shape `[Batch_Size, 400]`, only columns 0 to 4 of each row have non-zero values (my number of classes is 5); in each row, columns 5 to 399 are all zero. So when I use `_, pred_labels = torch.max(y_predict, 1)`, the predicted labels come from column indices 0 to 4. My model reaches 97% accuracy.

Is there any way I can make `torch.max` use only column indices 0 to 4 of each row to get the corresponding label?
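One way to restrict the argmax to the first 5 columns (a sketch of what I mean, using dummy data in place of the real model output) is to slice before calling `torch.max`:

```python
import torch

# Dummy output mimicking the setup: 400 columns, only 0-4 non-zero
y_predict = torch.zeros(8, 400)
y_predict[:, :5] = torch.rand(8, 5) + 0.1

# Slicing to the first 5 columns means torch.max can only return indices 0-4
_, pred_labels = torch.max(y_predict[:, :5], 1)
```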

(Or)

Getting an output of shape `[Batch_Size, 5]` instead of `[Batch_Size, 400]`, without affecting the accuracy much?
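For the second option, if the model ends in a fully connected layer (an assumption; my actual architecture is not shown here), I imagine replacing its head so it emits 5 logits directly, something like:

```python
import torch
import torch.nn as nn

# Hypothetical model head standing in for the real architecture
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64, 400),  # original head producing [Batch_Size, 400]
)

# Swap the final layer so the output is [Batch_Size, 5] instead
in_features = model[-1].in_features
model[-1] = nn.Linear(in_features, 5)

out = model(torch.randn(8, 64))  # shape [8, 5]
```

Would this require retraining from scratch, or can the rest of the weights be kept?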