Sudden Decrease in Training Accuracy

I am using LogSoftmax and NLLLoss for my classification problem (50 classes).
My training accuracy suddenly decreased after epoch 88.

Maximum at epoch 84: 71 %
At epoch 88: 51 %

Suddenly at epoch 89 the accuracy is 1.34 %, and it stays stuck at 1.34 % for every epoch after that (> epoch 89).

learning_rate = 1e-4
optimizer = optim.SGD(model.parameters(), lr=0.0015, momentum=0.9)

Please suggest why it is like that.
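For context, LogSoftmax as the final layer paired with NLLLoss is the standard setup in PyTorch (equivalent to CrossEntropyLoss on raw logits). A minimal sketch of that pairing, where the model and layer sizes are made up for illustration:

import torch
import torch.nn as nn

model = nn.Sequential(             # hypothetical head; sizes are made up
    nn.Flatten(),
    nn.Linear(3 * 224 * 224, 50),  # 50 output classes
    nn.LogSoftmax(dim=1),          # log-probabilities for NLLLoss
)
criterion = nn.NLLLoss()           # expects log-probs and integer class indices

x = torch.randn(8, 3, 224, 224)
target = torch.randint(0, 50, (8,))  # class indices, not one-hot vectors
loss = criterion(model(x), target)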

Mind sharing your code for the training loop and accuracy function?

My image loader's __getitem__():
img1 = self.loader(os.path.join(self.root1, path1))   # load the image from disk
img1 = np.asarray(img1)
img1 = torch.from_numpy(img1).float()
target = torch.eye(50)[target]                         # one-hot encode the label

return img1, target
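One thing to be aware of: nn.NLLLoss expects integer class indices as targets, not one-hot vectors, so a one-hot float target like torch.eye(50)[target] cannot be passed to the loss directly. A minimal sketch of a __getitem__ that returns the raw index instead (self.samples is a hypothetical (path, label) list; the one-hot form is then only needed if you want it for the accuracy computation):

def __getitem__(self, index):
    path1, target = self.samples[index]                  # hypothetical (path, label) list
    img1 = self.loader(os.path.join(self.root1, path1))
    img1 = torch.from_numpy(np.asarray(img1)).float()
    return img1, target                                  # plain integer label for NLLLoss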

criterion = nn.NLLLoss()
learning_rate = 1e-4
optimizer = optim.SGD(model.parameters(), lr=0.0015, momentum=0.9)
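# Note: learning_rate = 1e-4 above is never used; the optimizer is built with
# lr=0.0015. If 1e-4 was the intended rate, it would be:
# optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)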

for batch_idx, (imgs1, labels1) in enumerate(train_loader):

    img_org, target = imgs1.to(device, dtype=torch.float), labels1.to(device)
    img_org = img_org.permute(0, 3, 1, 2)  # NHWC -> NCHW
    optimizer.zero_grad()
    output = model(img_org)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()                       # update the weights
    train_loss += loss.item()              # accumulate the running epoch loss

    _, actual = torch.max(target.data, 1)      # class index from the one-hot label
    _, predicted = torch.max(output.data, 1)
    total = float(len(target))
    correct = (predicted == actual).sum()
    total_correct_test = total_correct_test + correct.item()


beta = len(train_loader.dataset) / batchsize   # number of batches per epoch
print(train_loss)
train_loss /= beta                             # average loss per batch
acc = 100. * total_correct_train / len(train_loader.dataset)
np_acc = np.around(100. * total_correct_train / len(train_loader.dataset), decimals=2)

print("Accuracy==", np_acc)

From what I can see, your main loop is accumulating total_correct_test, but the accuracy you print is computed from total_correct_train (np.around(acc, decimals=2)); not too sure what is happening here.
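For what it's worth, here is a sketch of the epoch with the counter names made consistent (assuming the one-hot labels from the __getitem__ above, and model, criterion, optimizer, device, and train_loader as defined in the post):

train_loss = 0.0
total_correct_train = 0.0

for batch_idx, (imgs1, labels1) in enumerate(train_loader):
    img_org, target = imgs1.to(device, dtype=torch.float), labels1.to(device)
    img_org = img_org.permute(0, 3, 1, 2)          # NHWC -> NCHW

    optimizer.zero_grad()
    output = model(img_org)                         # log-probabilities
    loss = criterion(output, target.argmax(dim=1))  # NLLLoss needs class indices
    loss.backward()
    optimizer.step()

    train_loss += loss.item()
    predicted = output.argmax(dim=1)
    actual = target.argmax(dim=1)                   # index back out of the one-hot label
    total_correct_train += (predicted == actual).sum().item()

train_loss /= len(train_loader)                     # average loss per batch
acc = 100. * total_correct_train / len(train_loader.dataset)
print("Accuracy==", round(acc, 2))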

total_correct_test is the total number of images classified correctly.

acc is just total_correct_test expressed as a percentage.
np_acc is acc rounded to 2 decimal places.

I have a similar issue. Was this problem ever solved?