Model stopped learning after running several times

I am training a model to recognize pictures of alphabet letters (size: 80 by 80). The images are stored in 26 sub-folders, one per letter.
The first run worked and gave me an accuracy of 90% on the test set. However, after running the training code below a few times (each time training on a new set of alphabet images to get a new model), the model suddenly stopped learning. When I rerun it with the data from the first run (the data that gave me 90%), the accuracy is only 3.84%. It no longer learns; no matter what data I give it, the accuracy stays at 3.84% forever. I really don't understand why. I have checked the code a few times but still cannot find the reason.

This is the model.

import torch.nn as nn

class CNN4CAM(nn.Module):
    def __init__(self):
        super(CNN4CAM, self).__init__()
        # Three conv blocks; each 3x3 conv keeps the spatial size
        # (stride 1, padding 1) and each max-pool halves it.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, 1, 1),
            nn.ReLU(True),
            nn.Conv2d(32, 64, 3, 1, 1),
            nn.ReLU(True),
            nn.MaxPool2d(2, 2),

            nn.Conv2d(64, 128, 3, 1, 1),
            nn.ReLU(True),
            nn.Conv2d(128, 256, 3, 1, 1),
            nn.ReLU(True),
            nn.MaxPool2d(2, 2),

            nn.Conv2d(256, 512, 3, 1, 1),
            nn.ReLU(True),
            nn.Conv2d(512, 1024, 3, 1, 1),
            nn.ReLU(True),
            nn.MaxPool2d(2, 2)
        )
        self.avg_pool = nn.AvgPool2d((10, 10))  # img_size // 8 = 80 // 8 = 10
        self.classifier = nn.Linear(1024, 26)   # one logit per letter

    def forward(self, x):
        features = self.conv(x)
        # Global average pooling: (N, 1024, 10, 10) -> (N, 1024)
        flatten = self.avg_pool(features).view(features.size(0), -1)
        output = self.classifier(flatten)
        return output, features
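
As a quick sanity check (this snippet is not from the original post, just a sketch assuming the class above is saved in newCNNmodel.py), you can confirm the shapes line up for an 80x80 grayscale input: three 2x2 max-pools reduce the feature map to 10x10, which the 10x10 average pool collapses to a 1024-dim vector.

import torch
from newCNNmodel import CNN4CAM

model = CNN4CAM()
dummy = torch.randn(4, 1, 80, 80)          # batch of 4 fake grayscale images
logits, features = model(dummy)
print(logits.shape)    # torch.Size([4, 26])
print(features.shape)  # torch.Size([4, 1024, 10, 10])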

This is the code for training

import torch
import torch.nn as nn
from newCNNmodel import CNN4CAM
import torchvision.transforms as transforms
from torchvision import datasets

# Load the 80x80 letter images as single-channel tensors;
# ImageFolder uses the 26 sub-folder names as class labels.
transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.ToTensor()
])
train_dir = 'E:/alpTrain/blur0trainingSh'
train_set = datasets.ImageFolder(train_dir, transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=10, shuffle=True)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
cnn = CNN4CAM().to(device)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.Adam(cnn.parameters(), lr=0.001)

print('Training Starts.')
EPOCH = 5

cnn.train()
for epoch in range(EPOCH):
    epoch_loss = 0
    for i, (images, labels) in enumerate(train_loader):
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs, _ = cnn(images)          # the model also returns feature maps
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()

    avg_epoch_loss = epoch_loss / len(train_loader)
    print('Epoch [%d/%d], Loss %.4f' % (epoch + 1, EPOCH, avg_epoch_loss))
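
The evaluation code that produced the 90% and 3.84% figures is not shown in the post; a minimal sketch of how such an accuracy might be computed (the test directory path here is a placeholder, not from the original post) would be:

test_dir = 'E:/alpTest'  # hypothetical path
test_set = datasets.ImageFolder(test_dir, transform)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=10, shuffle=False)

cnn.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        outputs, _ = cnn(images)
        preds = outputs.argmax(dim=1)         # predicted class per image
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print('Accuracy: %.2f%%' % (100.0 * correct / total))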

I cannot see any obvious errors in your code. Note that 3.84% is essentially chance level for 26 classes (1/26 ≈ 3.8%), which suggests the training diverged and the model collapsed to near-random predictions rather than there being a bug in the code.
Did you change anything in the code or your machine setup before the accuracy dropped?
If not, I would guess that your training might be sensitive to the random seed, and I would recommend playing around with some hyperparameters, such as the learning rate, to stabilize it.
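
One way to act on this suggestion (a sketch, not code from the thread): fix the random seeds before constructing the model and the DataLoader, so that successive runs start from the same initialization and hyperparameter comparisons are meaningful.

import random
import numpy as np
import torch

def set_seed(seed=0):
    # Seed all common RNG sources so runs are reproducible.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(0)  # call before CNN4CAM() and the DataLoader are created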

Thank you for your response. I talked to my friends about this yesterday, and the solution was exactly as you said. I decreased lr to 0.0001 and increased EPOCH from 5 to 20.
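
For anyone landing here later, that fix amounts to two changes in the training script above (values taken from the comment):

optimizer = torch.optim.Adam(cnn.parameters(), lr=0.0001)  # was lr=0.001
EPOCH = 20                                                 # was EPOCH = 5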