Training loss is not decreasing

Hi, I am a beginner in the deep learning field. I have learned many things via the PyTorch forum. Thanks to all.
Now I am working on a video captioning framework. I am training with cross-entropy loss and the Adam optimizer. The issue is that the loss starts at 8.7296 and drops to the 7.87 to 7.93 range within one epoch of training. After that, the loss stays stuck in this range.
Can anybody suggest what kinds of changes I could make to reduce the loss?

My model consists of an encoder and a decoder. The encoder takes CNN feature vectors of the video as input and consists of a single fully connected (fc) layer. The decoder is an LSTM network with 64 layers.
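To make the question concrete, here is a minimal sketch of the kind of setup described above. All names, dimensions, and the way the video encoding is fed to the decoder are my assumptions, not the actual code. Note that 64 stacked LSTM layers would be unusually deep and hard to train; a hidden size of 64 with 1 or 2 layers is far more typical, so the sketch uses 2 layers.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Projects precomputed CNN frame features with a single fc layer (assumed sizes)."""
    def __init__(self, feat_dim=2048, hidden_dim=512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, hidden_dim)

    def forward(self, feats):              # feats: (batch, frames, feat_dim)
        return torch.relu(self.fc(feats))  # (batch, frames, hidden_dim)

class Decoder(nn.Module):
    """LSTM caption decoder; 2 layers here instead of 64 (see note above)."""
    def __init__(self, vocab_size=10000, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, enc_out, captions):
        # Prepend the mean-pooled video encoding to the caption embeddings
        # (one simple conditioning scheme; the real framework may differ).
        ctx = enc_out.mean(dim=1, keepdim=True)            # (batch, 1, hidden)
        x = torch.cat([ctx, self.embed(captions)], dim=1)  # (batch, 1+T, hidden)
        h, _ = self.lstm(x)
        return self.out(h)                                 # (batch, 1+T, vocab)

feats = torch.randn(4, 8, 2048)           # 4 videos, 8 frames of CNN features
caps = torch.randint(0, 10000, (4, 12))   # 4 captions of length 12
enc, dec = Encoder(), Decoder()
logits = dec(enc(feats), caps)
print(logits.shape)  # torch.Size([4, 13, 10000])
```

For reference, a starting cross-entropy loss of 8.7 is roughly log(vocab_size) for a vocabulary of around 6000 to 10000 words, i.e. the loss of a uniform random guess, so the plateau just above that suggests the model is barely learning beyond word frequencies.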