Dev loss decreases along with training loss, then both become constant

Hello,
I am working on a computer vision project related to face anti-spoofing. My neural network is inspired by ResNet50 architecture. During training, the Dev loss and training loss decreases but at some point it almost become constant and the plots becomes almost parallel to each other. I have attached the image for reference. I have tried various regularization techniques like weight decay in the optimizer as well as dropouts in the neural network.

I am not able to understand the reason behind this behavior. What can I do to overcome this problem?