Is my model really performing well?

Hi, I am wondering if there is anything wrong with my training, as it achieves very high accuracy right from the start. The training logs on CIFAR10 look like this:

Files already downloaded and verified
Files already downloaded and verified

Epoch 000:
Training Loss: 64.2905438385933 | Training Accuracy: 91.71
Total Test Loss: 2.600952295586467 | Test Accuracy: 99.82
Saving...

Epoch 001:
Training Loss: 1.0865276760887355 | Training Accuracy: 99.81
Total Test Loss: 3.7665871270000935 | Test Accuracy: 99.92

Epoch 002:
Training Loss: 0.18417388796842715 | Training Accuracy: 99.986
Total Test Loss: 1.7079168632626534 | Test Accuracy: 99.96

Epoch 003:
Training Loss: 0.09753012470901012 | Training Accuracy: 99.994
Total Test Loss: 1.410150108858943 | Test Accuracy: 99.96

The good part is that, after downloading the saved model and testing it on examples, it still gives the same accuracy. Still, I suspect there may be something basic I am missing and that all this accuracy is due to an error.

The model is a stacked autoencoder plus classifier: a custom loss (including an MSE reconstruction term) for the autoencoder, and a cross-entropy loss for the classifier.
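
For reference, here is a minimal sketch of what I mean by the stacked setup (toy modules with made-up sizes, not my actual architecture):

    import torch
    import torch.nn as nn

    class ToyAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Linear(3 * 32 * 32, 64)   # encoder
            self.dec = nn.Linear(64, 3 * 32 * 32)   # decoder

        def forward(self, x):
            z = self.enc(x.flatten(1))   # encoded representation
            return z, self.dec(z)        # (encoded, reconstruction)

    autoencoder = ToyAutoencoder()
    classifier = nn.Linear(64, 10)       # CIFAR10 has 10 classes
    mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

    x = torch.randn(8, 3, 32, 32)        # fake CIFAR10 batch
    y = torch.randint(0, 10, (8,))
    encoded, decoded = autoencoder(x)
    loss_ae = mse(decoded, x.flatten(1))          # reconstruction (MSE) term
    loss_classifier = ce(classifier(encoded), y)  # cross-entropy term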

I am looking for any suggestions or checks you could advise to verify the correctness of my pipeline.

Training Pipeline:

    for epoch in range(EPOCH + 1):
        global best_train_acc
        running_loss, train_loss, correct, total = 0.0, 0, 0, 0
        autoencoder.train()
        for i, (inputs, labels) in enumerate(trainloader):
            inputs, labels = inputs.to(device), labels.to(device)
            encoded, decoded, _, _, _ = autoencoder(inputs, labels)
            loss_ae = custom_loss_fn()  # custom autoencoder loss (no arguments passed here)

            output_classifier = classifier(encoded)
            loss_classifier = ce(output_classifier, labels)  # classifier loss

            optimizer.zero_grad()
            optimizer_classifier.zero_grad()

            # backprop both losses; retain_graph=True keeps the graph alive
            # for the second backward pass through the shared encoder
            loss_ae.backward(retain_graph=True)
            loss_classifier.backward()
            optimizer.step()
            optimizer_classifier.step()

            # track classifier loss and accuracy for the epoch summary
            train_loss += loss_classifier.item()
            _, predicted = output_classifier.max(1)
            total += labels.size(0)
            correct += predicted.eq(labels).sum().item()
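
For what it's worth, I believe the two backward calls could also be fused into a single backward over the summed loss, which avoids retain_graph=True (a sketch of what I understand to be the equivalent pattern inside the loop above):

    # Equivalent (I think) to the two backward calls above: gradients
    # from both terms accumulate in the shared parameters either way.
    optimizer.zero_grad()
    optimizer_classifier.zero_grad()
    (loss_ae + loss_classifier).backward()   # one pass, no retain_graph
    optimizer.step()
    optimizer_classifier.step()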

Testing works the same way, except that I add autoencoder.eval() and wrap the loop in with torch.no_grad().
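
Concretely, the test loop is shaped roughly like this (a trimmed sketch, reusing the names from the training loop):

    autoencoder.eval()
    test_loss, correct, total = 0.0, 0, 0
    with torch.no_grad():
        for inputs, labels in testloader:
            inputs, labels = inputs.to(device), labels.to(device)
            encoded, decoded, _, _, _ = autoencoder(inputs, labels)
            output_classifier = classifier(encoded)
            test_loss += ce(output_classifier, labels).item()
            _, predicted = output_classifier.max(1)
            total += labels.size(0)
            correct += predicted.eq(labels).sum().item()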

The code looks generally alright. I don’t know how custom_loss_fn calculates the loss without any parameters, but you might be using globals for that.
You should probably also call classifier.eval() during testing.

A minor concern is the comparatively high test loss alongside the very high test accuracy, which don’t seem to correlate well, so you could double-check how the test loss and accuracy are calculated.
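
For example, one common accounting issue is printing the sum of per-batch losses instead of their mean, which inflates the reported number (a quick check, sketch only, assuming test_loss is accumulated with .item() per batch as in your training loop):

    # If test_loss is a running sum over batches, normalize before printing:
    avg_test_loss = test_loss / len(testloader)   # mean loss per batch
    print(f"Avg Test Loss: {avg_test_loss} | Test Accuracy: {100. * correct / total}")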