Training and testing loss is high

Hello, I hope you are doing well.
I am solving a classification problem with two main classes.
My model uses GRU and linear layers, the Adam optimizer, and the CrossEntropyLoss loss function.
Both the loss and the accuracy look wrong for a reason I can't identify:
the accuracy is really low and the loss stays around 0.75.
Do you know what the gap could be and why I am getting such results?

Thanks

It seems your model or training procedure might have a bug.
The best way to make sure your code works is to try to overfit your model on a small sample (e.g. just a single sample).
If your model cannot overfit the single sample, something in the architecture or training might be wrong.
Could you try to do that and report the results?
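
As an illustration, here is a rough sketch of what I mean by overfitting a single sample. The shapes (7 input features, hidden size 4, 2 classes), the sequence length, and the learning rate are made-up assumptions, so adapt it to your actual model:

import torch
import torch.nn as nn

# Made-up data: one sample, sequence length 5, 7 features per time step
x = torch.randn(1, 5, 7)        # (batch, seq_len, input_size)
y = torch.tensor([1])           # target label of the single sample

class TinyGRU(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(input_size=7, hidden_size=4, batch_first=True)
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        out, _ = self.gru(x)
        return self.linear(out[:, -1])   # logits from the last time step

model = TinyGRU()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(300):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

print(loss.item())   # should end up very close to zero if the pipeline is correct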

Thanks, here is the result of the run: 100 epochs, 1 sample for training and 1 for testing. Why are the values of auc_roc always nan?

Epoch 1/100 loss: 0.6639 - acc: 1.0000 - val_loss: 0.6122 - val_acc: 1.0000 - val_roc: 000nan
Epoch 2/100 loss: 0.6447 - acc: 1.0000 - val_loss: 0.6142 - val_acc: 1.0000 - val_roc: 000nan
Epoch 3/100 loss: 0.6999 - acc: 0.0000 - val_loss: 0.6368 - val_acc: 1.0000 - val_roc: 000nan
Epoch 4/100 loss: 0.6451 - acc: 1.0000 - val_loss: 0.5434 - val_acc: 1.0000 - val_roc: 000nan
Epoch 5/100 loss: 0.6243 - acc: 1.0000 - val_loss: 0.6269 - val_acc: 1.0000 - val_roc: 000nan
Epoch 6/100 loss: 0.5886 - acc: 1.0000 - val_loss: 0.5920 - val_acc: 1.0000 - val_roc: 000nan
Epoch 7/100 loss: 0.5408 - acc: 1.0000 - val_loss: 0.5496 - val_acc: 1.0000 - val_roc: 000nan
Epoch 8/100 loss: 0.6247 - acc: 1.0000 - val_loss: 0.5303 - val_acc: 1.0000 - val_roc: 000nan
Epoch 9/100 loss: 0.6389 - acc: 1.0000 - val_loss: 0.5526 - val_acc: 1.0000 - val_roc: 000nan
Epoch 10/100 loss: 0.7407 - acc: 0.0000 - val_loss: 0.6292 - val_acc: 1.0000 - val_roc: 000nan
Epoch 11/100 loss: 0.5630 - acc: 1.0000 - val_loss: 0.6084 - val_acc: 1.0000 - val_roc: 000nan
Epoch 12/100 loss: 0.6090 - acc: 1.0000 - val_loss: 0.6732 - val_acc: 1.0000 - val_roc: 000nan
Epoch 13/100 loss: 0.5534 - acc: 1.0000 - val_loss: 0.6409 - val_acc: 1.0000 - val_roc: 000nan
Epoch 14/100 loss: 0.6593 - acc: 1.0000 - val_loss: 0.6653 - val_acc: 1.0000 - val_roc: 000nan
Epoch 15/100 loss: 0.5573 - acc: 1.0000 - val_loss: 0.4994 - val_acc: 1.0000 - val_roc: 000nan
Epoch 16/100 loss: 0.5948 - acc: 1.0000 - val_loss: 0.6182 - val_acc: 1.0000 - val_roc: 000nan
Epoch 17/100 loss: 0.5219 - acc: 1.0000 - val_loss: 0.6496 - val_acc: 1.0000 - val_roc: 000nan
Epoch 18/100 loss: 0.4764 - acc: 1.0000 - val_loss: 0.5855 - val_acc: 1.0000 - val_roc: 000nan
Epoch 19/100 loss: 0.5772 - acc: 1.0000 - val_loss: 0.6684 - val_acc: 1.0000 - val_roc: 000nan
Epoch 20/100 loss: 0.5014 - acc: 1.0000 - val_loss: 0.5266 - val_acc: 1.0000 - val_roc: 000nan
Epoch 21/100 loss: 0.5669 - acc: 1.0000 - val_loss: 0.6259 - val_acc: 1.0000 - val_roc: 000nan
Epoch 22/100 loss: 0.5227 - acc: 1.0000 - val_loss: 0.5831 - val_acc: 1.0000 - val_roc: 000nan
Epoch 23/100 loss: 0.6289 - acc: 1.0000 - val_loss: 0.5928 - val_acc: 1.0000 - val_roc: 000nan
Epoch 24/100 loss: 0.5928 - acc: 1.0000 - val_loss: 0.6153 - val_acc: 1.0000 - val_roc: 000nan
Epoch 25/100 loss: 0.5538 - acc: 1.0000 - val_loss: 0.5555 - val_acc: 1.0000 - val_roc: 000nan
Epoch 26/100 loss: 0.5515 - acc: 1.0000 - val_loss: 0.4815 - val_acc: 1.0000 - val_roc: 000nan
Epoch 27/100 loss: 0.6470 - acc: 1.0000 - val_loss: 0.5834 - val_acc: 1.0000 - val_roc: 000nan
Epoch 28/100 loss: 0.5991 - acc: 1.0000 - val_loss: 0.6326 - val_acc: 1.0000 - val_roc: 000nan
Epoch 29/100 loss: 0.5483 - acc: 1.0000 - val_loss: 0.5693 - val_acc: 1.0000 - val_roc: 000nan
Epoch 30/100 loss: 0.4610 - acc: 1.0000 - val_loss: 0.5385 - val_acc: 1.0000 - val_roc: 000nan
Epoch 31/100 loss: 0.6384 - acc: 1.0000 - val_loss: 0.4610 - val_acc: 1.0000 - val_roc: 000nan
Epoch 32/100 loss: 0.6100 - acc: 1.0000 - val_loss: 0.4956 - val_acc: 1.0000 - val_roc: 000nan
Epoch 33/100 loss: 0.5538 - acc: 1.0000 - val_loss: 0.5950 - val_acc: 1.0000 - val_roc: 000nan
Epoch 34/100 loss: 0.4978 - acc: 1.0000 - val_loss: 0.4806 - val_acc: 1.0000 - val_roc: 000nan
Epoch 35/100 loss: 0.5172 - acc: 1.0000 - val_loss: 0.5676 - val_acc: 1.0000 - val_roc: 000nan
Epoch 36/100 loss: 0.5714 - acc: 1.0000 - val_loss: 0.5535 - val_acc: 1.0000 - val_roc: 000nan
Epoch 37/100 loss: 0.6651 - acc: 1.0000 - val_loss: 0.5463 - val_acc: 1.0000 - val_roc: 000nan
Epoch 38/100 loss: 0.5412 - acc: 1.0000 - val_loss: 0.6234 - val_acc: 1.0000 - val_roc: 000nan
Epoch 39/100 loss: 0.5188 - acc: 1.0000 - val_loss: 0.5380 - val_acc: 1.0000 - val_roc: 000nan
Epoch 40/100 loss: 0.5751 - acc: 1.0000 - val_loss: 0.6268 - val_acc: 1.0000 - val_roc: 000nan
Epoch 41/100 loss: 0.4608 - acc: 1.0000 - val_loss: 0.5827 - val_acc: 1.0000 - val_roc: 000nan
Epoch 42/100 loss: 0.5282 - acc: 1.0000 - val_loss: 0.4714 - val_acc: 1.0000 - val_roc: 000nan
Epoch 43/100 loss: 0.5524 - acc: 1.0000 - val_loss: 0.4762 - val_acc: 1.0000 - val_roc: 000nan
Epoch 44/100 loss: 0.5560 - acc: 1.0000 - val_loss: 0.4422 - val_acc: 1.0000 - val_roc: 000nan
Epoch 45/100 loss: 0.5200 - acc: 1.0000 - val_loss: 0.5564 - val_acc: 1.0000 - val_roc: 000nan
Epoch 46/100 loss: 0.5861 - acc: 1.0000 - val_loss: 0.5194 - val_acc: 1.0000 - val_roc: 000nan
Epoch 47/100 loss: 0.4303 - acc: 1.0000 - val_loss: 0.5422 - val_acc: 1.0000 - val_roc: 000nan
Epoch 48/100 loss: 0.4627 - acc: 1.0000 - val_loss: 0.4563 - val_acc: 1.0000 - val_roc: 000nan
Epoch 49/100 loss: 0.4126 - acc: 1.0000 - val_loss: 0.5426 - val_acc: 1.0000 - val_roc: 000nan
Epoch 50/100 loss: 0.4448 - acc: 1.0000 - val_loss: 0.4956 - val_acc: 1.0000 - val_roc: 000nan
Epoch 51/100 loss: 0.4756 - acc: 1.0000 - val_loss: 0.3807 - val_acc: 1.0000 - val_roc: 000nan
Epoch 52/100 loss: 0.4906 - acc: 1.0000 - val_loss: 0.4367 - val_acc: 1.0000 - val_roc: 000nan
Epoch 53/100 loss: 0.5350 - acc: 1.0000 - val_loss: 0.4401 - val_acc: 1.0000 - val_roc: 000nan
Epoch 54/100 loss: 0.5158 - acc: 1.0000 - val_loss: 0.5971 - val_acc: 1.0000 - val_roc: 000nan
Epoch 55/100 loss: 0.3638 - acc: 1.0000 - val_loss: 0.4439 - val_acc: 1.0000 - val_roc: 000nan
Epoch 56/100 loss: 0.4309 - acc: 1.0000 - val_loss: 0.4926 - val_acc: 1.0000 - val_roc: 000nan
Epoch 57/100 loss: 0.5687 - acc: 1.0000 - val_loss: 0.5362 - val_acc: 1.0000 - val_roc: 000nan
Epoch 58/100 loss: 0.4342 - acc: 1.0000 - val_loss: 0.5274 - val_acc: 1.0000 - val_roc: 000nan
Epoch 59/100 loss: 0.5823 - acc: 1.0000 - val_loss: 0.5437 - val_acc: 1.0000 - val_roc: 000nan
Epoch 60/100 loss: 0.4977 - acc: 1.0000 - val_loss: 0.4626 - val_acc: 1.0000 - val_roc: 000nan
Epoch 61/100 loss: 0.4301 - acc: 1.0000 - val_loss: 0.5634 - val_acc: 1.0000 - val_roc: 000nan
Epoch 62/100 loss: 0.5764 - acc: 1.0000 - val_loss: 0.4220 - val_acc: 1.0000 - val_roc: 000nan
Epoch 63/100 loss: 0.4134 - acc: 1.0000 - val_loss: 0.4579 - val_acc: 1.0000 - val_roc: 000nan
Epoch 64/100 loss: 0.4567 - acc: 1.0000 - val_loss: 0.5778 - val_acc: 1.0000 - val_roc: 000nan
Epoch 65/100 loss: 0.4165 - acc: 1.0000 - val_loss: 0.5290 - val_acc: 1.0000 - val_roc: 000nan
Epoch 66/100 loss: 0.3902 - acc: 1.0000 - val_loss: 0.4509 - val_acc: 1.0000 - val_roc: 000nan
Epoch 67/100 loss: 0.3772 - acc: 1.0000 - val_loss: 0.5043 - val_acc: 1.0000 - val_roc: 000nan
Epoch 68/100 loss: 0.3754 - acc: 1.0000 - val_loss: 0.3601 - val_acc: 1.0000 - val_roc: 000nan
Epoch 69/100 loss: 0.3820 - acc: 1.0000 - val_loss: 0.4563 - val_acc: 1.0000 - val_roc: 000nan
Epoch 70/100 loss: 0.3793 - acc: 1.0000 - val_loss: 0.4135 - val_acc: 1.0000 - val_roc: 000nan
Epoch 71/100 loss: 0.5005 - acc: 1.0000 - val_loss: 0.4114 - val_acc: 1.0000 - val_roc: 000nan
Epoch 72/100 loss: 0.5058 - acc: 1.0000 - val_loss: 0.4379 - val_acc: 1.0000 - val_roc: 000nan
Epoch 73/100 loss: 0.5321 - acc: 1.0000 - val_loss: 0.4355 - val_acc: 1.0000 - val_roc: 000nan
Epoch 74/100 loss: 0.4466 - acc: 1.0000 - val_loss: 0.4963 - val_acc: 1.0000 - val_roc: 000nan
Epoch 75/100 loss: 0.3814 - acc: 1.0000 - val_loss: 0.3516 - val_acc: 1.0000 - val_roc: 000nan
Epoch 76/100 loss: 0.3500 - acc: 1.0000 - val_loss: 0.4960 - val_acc: 1.0000 - val_roc: 000nan
Epoch 77/100 loss: 0.3406 - acc: 1.0000 - val_loss: 0.4104 - val_acc: 1.0000 - val_roc: 000nan
Epoch 78/100 loss: 0.4097 - acc: 1.0000 - val_loss: 0.3950 - val_acc: 1.0000 - val_roc: 000nan
Epoch 79/100 loss: 0.4751 - acc: 1.0000 - val_loss: 0.4306 - val_acc: 1.0000 - val_roc: 000nan
Epoch 80/100 loss: 0.3236 - acc: 1.0000 - val_loss: 0.4117 - val_acc: 1.0000 - val_roc: 000nan
Epoch 81/100 loss: 0.3355 - acc: 1.0000 - val_loss: 0.3545 - val_acc: 1.0000 - val_roc: 000nan
Epoch 82/100 loss: 0.4293 - acc: 1.0000 - val_loss: 0.3483 - val_acc: 1.0000 - val_roc: 000nan
Epoch 83/100 loss: 0.3347 - acc: 1.0000 - val_loss: 0.4013 - val_acc: 1.0000 - val_roc: 000nan
Epoch 84/100 loss: 0.3636 - acc: 1.0000 - val_loss: 0.3877 - val_acc: 1.0000 - val_roc: 000nan
Epoch 85/100 loss: 0.4909 - acc: 1.0000 - val_loss: 0.3191 - val_acc: 1.0000 - val_roc: 000nan
Epoch 86/100 loss: 0.4887 - acc: 1.0000 - val_loss: 0.4015 - val_acc: 1.0000 - val_roc: 000nan
Epoch 87/100 loss: 0.3689 - acc: 1.0000 - val_loss: 0.3816 - val_acc: 1.0000 - val_roc: 000nan
Epoch 88/100 loss: 0.4261 - acc: 1.0000 - val_loss: 0.4574 - val_acc: 1.0000 - val_roc: 000nan
Epoch 89/100 loss: 0.3855 - acc: 1.0000 - val_loss: 0.4199 - val_acc: 1.0000 - val_roc: 000nan
Epoch 90/100 loss: 0.4726 - acc: 1.0000 - val_loss: 0.4844 - val_acc: 1.0000 - val_roc: 000nan
Epoch 91/100 loss: 0.2764 - acc: 1.0000 - val_loss: 0.3909 - val_acc: 1.0000 - val_roc: 000nan
Epoch 92/100 loss: 0.3507 - acc: 1.0000 - val_loss: 0.3096 - val_acc: 1.0000 - val_roc: 000nan
Epoch 93/100 loss: 0.3022 - acc: 1.0000 - val_loss: 0.3459 - val_acc: 1.0000 - val_roc: 000nan
Epoch 94/100 loss: 0.4090 - acc: 1.0000 - val_loss: 0.3050 - val_acc: 1.0000 - val_roc: 000nan
Epoch 95/100 loss: 0.3081 - acc: 1.0000 - val_loss: 0.3604 - val_acc: 1.0000 - val_roc: 000nan
Epoch 96/100 loss: 0.3630 - acc: 1.0000 - val_loss: 0.2944 - val_acc: 1.0000 - val_roc: 000nan
Epoch 97/100 loss: 0.5035 - acc: 1.0000 - val_loss: 0.3200 - val_acc: 1.0000 - val_roc: 000nan
Epoch 98/100 loss: 0.4066 - acc: 1.0000 - val_loss: 0.3779 - val_acc: 1.0000 - val_roc: 000nan
Epoch 99/100 loss: 0.3435 - acc: 1.0000 - val_loss: 0.3576 - val_acc: 1.0000 - val_roc: 000nan
Epoch 100/100 loss: 0.3131 - acc: 1.0000 - val_loss: 0.3457 - val_acc: 1.0000 - val_roc: 000nan
Test score: 36.65938675403595
Test accuracy: 100.0
Test ROC: nan
GRU(
(gru): GRU(7, 4)
(linear): Linear(in_features=4, out_features=2, bias=True)
)

How do you calculate the AUC?
Is the loss shrinking to zero or a really low number?

from sklearn import metrics

# y: true labels, yy: predicted scores for the positive class
fpr, tpr, _ = metrics.roc_curve(y, yy)
roc_auc = metrics.auc(fpr, tpr)

It was added to the evaluation function. Sorry, I did not understand the second question.

I think the AUC and ROC are not defined for a single point.
Do you get any warnings?
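
For illustration, here is a toy example of what I think happens with a single test sample (my guess at the call, not your exact code):

from sklearn import metrics

# Only one class present in y_true, so the false positive rate is undefined
y = [1]        # true label of the only test sample
yy = [0.7]     # predicted score for class 1
fpr, tpr, _ = metrics.roc_curve(y, yy)   # should emit an UndefinedMetricWarning
print(fpr)                               # contains nan
print(metrics.auc(fpr, tpr))             # nan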

For the second question:
Your accuracy is 100% from the first epoch and just switches to 0% two times.
If you look at the training loss, do you see it approaching zero?

y contains the test labels and yy the values predicted by the model:

y_pred = self.predict(X)                      # model outputs, shape (num_samples, 2)
yy = y_pred.detach().numpy()[:, 1].flatten()  # scores for class 1

y_pred has dimensions of (number of test samples, 2) where 2 is the number of classes

Have you used any activation function in self.predict?
As far as I know, roc_curve needs probabilities. So you might want to use F.softmax(y_pred, dim=1) if y_pred contains logits.
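
Something along these lines (a sketch with dummy stand-ins, assuming y_pred are raw logits of shape (N, 2) and y are the true 0/1 labels):

import torch
import torch.nn.functional as F
from sklearn import metrics

# Dummy stand-ins for illustration
y_pred = torch.randn(8, 2)                 # raw logits, shape (N, 2)
y = [0, 1, 0, 1, 1, 0, 1, 0]               # true labels

probs = F.softmax(y_pred, dim=1)           # logits -> probabilities
yy = probs.detach().numpy()[:, 1]          # probability of the positive class
fpr, tpr, _ = metrics.roc_curve(y, yy)
roc_auc = metrics.auc(fpr, tpr)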

Actually, no, I am not getting any warnings. I compute the AUC/ROC only in the evaluate function, which we call twice: when validating the model after training and when testing.
I don't think the loss is approaching 0. The values in the run are not percentages; the minimum so far is about 0.31.
Do you think I have to increase the number of epochs to see it approach 0?

Actually, no, I am not using any activation function, but I am using CrossEntropyLoss, which should apply softmax already.

CrossEntropyLoss applies LogSoftmax to its input internally, so your model output will still be logits.
Therefore you would need to transform them into probabilities with Softmax to calculate the ROC/AUC.
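
As a small sanity check of that point (a toy snippet, not taken from your code): the loss computed from raw logits matches NLLLoss applied to LogSoftmax, which is why the model itself keeps returning logits:

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(3, 2)                 # raw model outputs
target = torch.tensor([0, 1, 1])

ce = nn.CrossEntropyLoss()(logits, target)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)
print(torch.allclose(ce, nll))             # True: CrossEntropyLoss == NLLLoss(LogSoftmax(logits))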

Yes, it would be a good idea to make sure the loss approaches zero.
Changing the learning rate, increasing the epochs, etc. might help.

So I added a softmax at the end of the model, but I am still getting nan for the ROC/AUC.
Regarding the loss: yes, it is approaching 0.

Sorry for the confusion. You shouldn’t add it at the end of your model, but just for the calculation of the ROC/AUC.
It’s good if it’s approaching zero, so the sanity check was successful.
Now you could try to scale your problem up, i.e. give it more data, and see if it’s still capable of learning.

Actually, I applied the softmax to the predictions before the ROC/AUC calculation, but with one sample for training and testing it still gives nan. However, when I increase the number of samples it starts giving good values (above 70%), but the loss is still high as well. Why is that?

I am not sure if this information helps, but when I calculated the confusion matrix for the testing data
I got these results:

 val_loss: 63.5753 - val_acc: 63.3333 - val_roc: 67.1875
 avg_correct: 63.3333 - avg_wrong: 36.6667 -

This is how I calculate the metric:

cm = metrics.confusion_matrix(y, yy.round(), labels=[0, 1])

where y contains the true labels and yy the predicted values:

y_pred = F.softmax(y_pred, dim=1)             # logits -> probabilities
yy = y_pred.detach().numpy()[:, 1].flatten()  # probability of class 1