Accuracy stuck for Recurrent Convolutional Neural Network

Hi, I am implementing an RCNN for emotion classification in text. I use a bi-directional LSTM, then apply max-pooling to extract the dominant features and pass them to fully connected layers for classification. Right now I am stuck: after reaching 83% accuracy, my model plateaus, and accuracy and loss don't change after that. I have attached an image of its performance. Kindly suggest anything that would help. Thank you.
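For reference, here is a minimal sketch of the architecture as described (BiLSTM, then max-pool over time, then a fully connected classifier). All sizes (vocab, embedding, hidden dims, number of classes) are illustrative assumptions, not the actual values used:

```python
import torch
import torch.nn as nn

class RCNNClassifier(nn.Module):
    """Sketch: BiLSTM -> max-pool over time -> FC classifier.
    Hyperparameters below are assumptions for illustration only."""
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128, num_classes=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):                    # x: (batch, seq_len) token ids
        h, _ = self.lstm(self.embedding(x))  # (batch, seq_len, 2*hidden_dim)
        pooled, _ = h.max(dim=1)             # max-pool over time: dominant features
        return self.fc(pooled)               # (batch, num_classes) logits

model = RCNNClassifier()
logits = model(torch.randint(0, 10000, (4, 20)))
print(logits.shape)  # torch.Size([4, 5])
```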

Optimizer: SGD
Loss: Focal Loss
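Since focal loss is mentioned, here is a sketch of the standard multi-class form (Lin et al.), which down-weights well-classified examples by a factor of (1 − p_t)^γ. The γ = 2.0 default and the absence of per-class α weighting are assumptions; the actual loss used may differ:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Multi-class focal loss sketch: -(1 - p_t)^gamma * log(p_t),
    averaged over the batch. gamma=2.0 is the common default (assumed here)."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of true class
    pt = log_pt.exp()
    return (-(1 - pt) ** gamma * log_pt).mean()

logits = torch.tensor([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
targets = torch.tensor([0, 2])
loss = focal_loss(logits, targets)
```

Because the (1 − p_t)^γ factor only shrinks each term, this loss is always at most the plain cross-entropy on the same batch.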

[attached image: training metrics]

I’m not saying there isn’t anything that can be tweaked, but this looks like a pretty normal result for this kind of text classification task to me: training accuracy heading towards 100%, and test/validation accuracy plateauing around 80%+. Three related things to consider:

  • Sentiment and emotion are highly subjective. I bet that if you look at the dataset, you won’t agree with all the labels. Alternatively, give the same text to 3 different people, and in many cases you won’t get perfect agreement, particularly for emotion, where you typically have 4-7 classes.

  • Language can be extremely subtle, and there are stylistic devices (sarcasm, irony, cynicism, humor) that are arbitrarily difficult for a network to learn, particularly without virtually unlimited training data.

  • What a text conveys often goes beyond the words themselves and relies on shared knowledge and understanding. Consider the simple sentence “There’s a spider on my pillow.” On its own, this sentence is as neutral as it gets. It’s the general knowledge that most people are afraid of spiders that associates this sentence with “fear”.

Language is way too expressive to expect a network to find simple patterns.

Makes sense, I guess.