Train Accuracy decreases after some epochs

After running my model multiple times, both the train accuracy and the test accuracy drop after the 4th epoch.
How is this possible, and what could be the reasons?

Train Accuracy
0,==Acc==,45.29
1,==Acc==,67.1
2,==Acc==,74.84
3,==Acc==,80.25
4,==Acc==,83.77
5,==Acc==,56.99
6,==Acc==,46.62
7,==Acc==,46.06
8,==Acc==,46.92
9,==Acc==,46.66
10,==Acc==,47.72
11,==Acc==,48.04
12,==Acc==,46.24
13,==Acc==,48.41
0,==Acc==,6.37
14,==Acc==,29.57
15,==Acc==,47.54
16,==Acc==,47.54
17,==Acc==,46.96
0,==Acc==,10.82
18,==Acc==,47.54

Test Accuracy
0,==Acc==,45.28
1,==Acc==,71.76
2,==Acc==,79.92
3,==Acc==,80.61
4,==Acc==,94.31
5,==Acc==,46.17
6,==Acc==,47.49
7,==Acc==,47.49
8,==Acc==,46.17
9,==Acc==,48.86
10,==Acc==,47.49
11,==Acc==,47.47
12,==Acc==,47.48
13,==Acc==,43.71
14,==Acc==,47.47
15,==Acc==,47.49
16,==Acc==,47.49
17,==Acc==,47.49
18,==Acc==,47.49

I am using a simple custom cross-entropy loss:

import torch

def custom_categorical_cross_entropy(y_pred, y_true):
    y_pred = torch.clamp(y_pred, 1e-9, 1 - 1e-9)
    return -(y_true * torch.log(y_pred)).sum(dim=1).mean()
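Note that this version only behaves like a cross entropy if y_pred already contains probabilities (e.g. softmax outputs) and y_true is one-hot. A minimal usage sketch, with made-up shapes and class count:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)                    # hypothetical raw model outputs (batch of 4, 3 classes)
probs = F.softmax(logits, dim=1)              # must be probabilities before this loss
targets = F.one_hot(torch.tensor([0, 2, 1, 0]), num_classes=3).float()  # one-hot labels

loss = custom_categorical_cross_entropy(probs, targets)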

If instead I use

def custom_categorical_cross_entropy(y_pred, y_true):
    return -(y_true * torch.log(y_pred)).sum(dim=1).mean()

the loss comes out as NaN.
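That follows from the math rather than from PyTorch: without the clamp, a predicted probability of exactly 0 gives log(0) = -inf, and 0 * -inf is NaN under floating-point rules; negative inputs (e.g. raw logits passed in by mistake) produce NaN immediately. A tiny made-up illustration:

import torch

t = torch.tensor([[1.0, 0.0]])           # one-hot target
p = torch.tensor([[1.0, 0.0]])           # predicted probability of exactly 0 for the second class
print(torch.log(p))                      # tensor([[0., -inf]])
print(-(t * torch.log(p)).sum(dim=1))    # 0 * -inf = nan, so the batch loss is tensor([nan])
print(torch.log(torch.tensor([-0.3])))   # negative inputs (e.g. raw logits) give nan directly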

I have also tried LogSoftmax with NLLLoss(); there too, the loss becomes NaN after some epochs.

What could be the possible issue?

There might be different reasons for the divergence, such as an exploding loss.
I would also not recommend reimplementing already provided loss functions if the custom ones don’t add any new functionality (e.g. soft labels), since e.g. nn.CrossEntropyLoss internally uses the log-sum-exp trick for numerical stability, which is often missing in custom implementations.
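As a rough sketch of that suggestion (the shapes, class count, and variable names below are made up): nn.CrossEntropyLoss consumes raw logits directly, and if soft or one-hot labels are really needed, a numerically stable custom variant can be built from F.log_softmax instead of log(softmax(...)):

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 3, requires_grad=True)   # raw model outputs: no softmax applied
targets = torch.tensor([0, 2, 1, 0])              # class indices

# built-in loss: log_softmax (log-sum-exp trick) is applied internally, so it stays finite
loss = nn.CrossEntropyLoss()(logits, targets)

# stable soft-label variant: reuse log_softmax instead of log(softmax(...))
soft_targets = F.one_hot(targets, num_classes=3).float()
stable_loss = -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

print(loss.item(), stable_loss.item())            # identical up to floating-point error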

I have also tried LogSoftmax and NLLLoss(). In that case the loss also comes out as NaN.
My question is: is the problem with the data, the model, or something else?

I am using LogSoftmax and NLLLoss().
My training accuracy dropped after epoch 88.
At epoch 88 the accuracy is 51%.
Suddenly, at epoch 89, the accuracy is 1.53%.