How can I change the model output shape so that the model produces a number of outputs equal to the number of classes, without much affecting the accuracy?

I have trained on a dataset with 5 different classes, using a model that produces outputs of shape [Batch_Size, 400], with Cross Entropy Loss and the Adam optimizer, and without a Softmax function.

I get the predicted outputs with y_predict = model(images.to(device)), which have shape [Batch_Size, 400]. I then use _, pred_labels = torch.max(y_predict, 1) to get the predicted labels, which have shape [Batch_Size]. Finally, I compare them with the true labels of that batch, true_labels, using running_corrects += torch.sum(pred_labels == true_labels.data.to(device)) to count the correct predictions.
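
For context, my evaluation step looks roughly like this (data_loader is just a placeholder name for my DataLoader; model and device are as described above):

import torch

running_corrects = 0
total = 0
with torch.no_grad():
    for images, true_labels in data_loader:
        y_predict = model(images.to(device))       # shape [Batch_Size, 400]
        _, pred_labels = torch.max(y_predict, 1)   # shape [Batch_Size]
        running_corrects += torch.sum(pred_labels == true_labels.to(device))
        total += true_labels.size(0)
accuracy = running_corrects.item() / total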

In the model output of shape [Batch_Size, 400], only the columns with indices 0 to 4 in each row have non-zero values (my number of classes is 5); the values in columns 5 to 399 are all zero. So when I use _, pred_labels = torch.max(y_predict, 1), the predicted labels come from column indices 0 to 4. My model produced 97% accuracy.
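
For example, with made-up values (just to illustrate the behaviour I am describing):

import torch

y_predict = torch.zeros(2, 400)    # Batch_Size = 2, columns 5 to 399 stay zero
y_predict[:, :5] = torch.tensor([[1.2, 0.3, 4.1, 0.7, 0.9],
                                 [0.2, 3.5, 0.1, 0.4, 0.8]])
_, pred_labels = torch.max(y_predict, 1)
print(pred_labels)                 # tensor([2, 1]) -- labels come from indices 0 to 4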

Is there any way I can:
make the torch.max function use only column indices 0 to 4 of each row to get the corresponding label?
(Or)
get an output of shape [Batch_Size, 5] instead of [Batch_Size, 400], without much affecting the accuracy?

Hi @palguna_gopireddy,

You can just use slicing to get the first 5 columns:

import torch

batch_size = 32                   # example batch size
x = torch.randn(batch_size, 400)  # random output data
x_5col = x[:, :5]                 # get the first 5 columns
x_5col.shape                      # torch.Size([batch_size, 5])
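
You can then take the predicted labels directly from that slice, e.g.:

_, pred_labels = torch.max(x_5col, 1)  # labels will be in 0..4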

Thank you. Is it valid as a correct way of training if I apply y_predict[:, :5] right after y_predict = model(images.to(device)) in the training stage, or should the slicing go into the model itself?

That will depend on how your model is defined, and you may need to make changes accordingly.
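
For example, if your model's final layer is a torch.nn.Linear, you could replace it so the model outputs 5 logits directly and then retrain or fine-tune. A minimal sketch, assuming the final layer is stored in an attribute called fc (substitute whatever attribute name your model actually uses):

import torch.nn as nn

num_classes = 5
in_features = model.fc.in_features              # input size of the current final layer
model.fc = nn.Linear(in_features, num_classes)  # output shape becomes [Batch_Size, 5]

nn.CrossEntropyLoss keeps working unchanged with this, since it expects raw logits of shape [Batch_Size, num_classes].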