Training loss decreases slowly

Training loss decreases slowly no matter which learning rate I use. The optimizer is Adam. I tried different scheduling schemes, but the loss follows the same pattern. I started with a small dataset. To get the loss to converge I have to run a much larger number of epochs, but that is time-consuming.

One important point: for all the learning rates I tried (0.01, 0.001, 0.0001, 0.0008, 0.008), the loss follows the same pattern. Which parameters should I work on to make the loss decrease faster?
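To illustrate why the learning rate matters so much with Adam: because Adam normalizes the gradient by its running second moment, the per-step movement is roughly proportional to the learning rate itself, so a very small lr means very slow progress regardless of the loss surface. Here is a minimal pure-Python sketch of the Adam update on a toy 1-D quadratic (illustrative only, not the actual model or data from the question):

```python
import math

def adam_minimize(lr, steps=200, w=0.0):
    """Minimize the toy loss f(w) = (w - 3)^2 with Adam; returns final loss."""
    b1, b2, eps = 0.9, 0.999, 1e-8   # standard Adam defaults
    m = v = 0.0
    for t in range(1, steps + 1):
        g = 2.0 * (w - 3.0)           # gradient of (w - 3)^2
        m = b1 * m + (1 - b1) * g     # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g * g # second-moment estimate
        m_hat = m / (1 - b1 ** t)     # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return (w - 3.0) ** 2

for lr in (0.01, 0.001, 0.0001):
    print(lr, adam_minimize(lr))
```

Running this shows the pattern from the question: with a fixed step budget, the smaller learning rates leave the loss far from zero because Adam's effective step size is close to lr, so loss curves at different learning rates look like scaled copies of each other.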

Adam can be tricky sometimes, and in my experience it prefers smaller learning rates. Have you tried good old SGD?
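For comparison, a minimal SGD-with-momentum sketch on a toy 1-D quadratic (illustrative only; the function, step counts, and hyperparameters here are assumptions, not the poster's setup). Unlike Adam, plain SGD's step size scales with the raw gradient magnitude, so on a well-conditioned problem it can converge quickly once the learning rate is tuned:

```python
def sgd_minimize(lr, steps=200, w=0.0, momentum=0.9):
    """Minimize the toy loss f(w) = (w - 3)^2 with SGD + momentum; returns final loss."""
    buf = 0.0
    for _ in range(steps):
        g = 2.0 * (w - 3.0)       # gradient of (w - 3)^2
        buf = momentum * buf + g  # momentum buffer accumulates past gradients
        w -= lr * buf
    return (w - 3.0) ** 2

print(sgd_minimize(0.1))
```

The flip side is that SGD is more sensitive to the learning-rate choice than Adam, which may explain a much higher loss if the same learning rate is carried over from the Adam runs without re-tuning.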

Yes, but the loss value is much higher.