Interpreting loss value alongside my sensitivity and specificity

Hi all,

I am currently developing an LSTM model for binary classification, and I am at a point where I am not sure what to expect from my loss value.

Here is my loss from training and validation. I run one epoch of validation before the training loop to see how the model performs initially. As expected, validation performs slightly better than training, since I use dropout = 0.1 (dropout is active during training but disabled during evaluation):
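For anyone wondering why validation loss can be lower than training loss when dropout is on: during training each unit is zeroed with probability p and the survivors are rescaled, while at evaluation time the layer is the identity. A minimal pure-Python sketch of inverted dropout (the scheme `nn.Dropout` uses, assuming you call `model.eval()` before validation):

```python
import random

def dropout(x, p=0.1, training=True):
    """Inverted dropout on a single activation: zero it with probability p
    during training and rescale survivors by 1/(1-p); identity at eval time."""
    if not training:
        return x
    return 0.0 if random.random() < p else x / (1.0 - p)

random.seed(0)
# During training the activation is noisy, but its expected value is preserved...
samples = [dropout(1.0, p=0.1, training=True) for _ in range(100_000)]
avg = sum(samples) / len(samples)
# ...while at evaluation time the unit passes through deterministically,
# which is one reason eval-mode (validation) loss can look better than train loss.
eval_out = dropout(1.0, p=0.1, training=False)
```

The training-time noise makes the per-batch training loss an overestimate of what the same weights achieve in eval mode.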

Train Epoch: 47	Loss: 0.418231	TPR: 90.32258064516128	TNR: 93.67773677736777	Time: 0.015681
Val_loss: 0.402811	TPR: 90.625	TNR: 94.61122047244095	Time: 0.433588

Val_loss: 0.319000	TPR: 93.54838709677419	TNR: 94.24354243542436	Time: 0.443668

Train Epoch: 48	Loss: 0.444865	TPR: 90.625	TNR: 92.96259842519686	Time: 0.015649
Val_loss: 0.326329	TPR: 93.54838709677419	TNR: 94.31734317343174	Time: 0.443036

Val_loss: 0.331961	TPR: 90.32258064516128	TNR: 94.36654366543665	Time: 0.439770

Train Epoch: 49	Loss: 0.401345	TPR: 90.32258064516128	TNR: 93.82533825338253	Time: 0.016455
Val_loss: 0.369906	TPR: 90.625	TNR: 94.04527559055119	Time: 0.418198

The thing I am not too sure about is the loss value: I do not know whether this value is large or small, but I can see that my TPR and TNR are high, which is a good thing.
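One way to put a loss number in context (assuming you are using binary cross-entropy, which the sigmoid-thresholding below suggests): the loss on one example is -log(p), where p is the probability the model assigned to the correct class, so an average loss L corresponds to a geometric-mean correct-class probability of exp(-L). A quick sketch:

```python
import math

def bce(p_correct):
    """Binary cross-entropy for one example, given the probability
    the model assigned to the true class."""
    return -math.log(p_correct)

# A loss of ~0.40 corresponds to a geometric-mean probability of
# exp(-0.40) ~= 0.67 on the correct class.
implied_p = math.exp(-0.40)

# Reference point: always predicting 0.5 scores -log(0.5) ~= 0.693,
# so a loss near 0.69 means "no better than chance" on balanced classes.
chance_loss = bce(0.5)
```

So a validation loss around 0.32-0.40 says the model is clearly better than chance, but on average it is only ~67-73% confident in the correct class, even though thresholding at 0.5 gives you high TPR/TNR. Loss and thresholded accuracy measure different things.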

                # Threshold sigmoid outputs at 0.5 to get hard 0/1 predictions
                pred = (predictions.detach() > 0.5).float()

                correct_indx_positive = pred[target == 1]   # predictions where the true label is 1
                correct_indx_negative = pred[target == 0]   # predictions where the true label is 0

                # Guard against batches containing only one class (avoids ZeroDivisionError)
                if len(correct_indx_positive) > 0:
                    TPR = len(correct_indx_positive[correct_indx_positive == 1]) / len(correct_indx_positive)
                if len(correct_indx_negative) > 0:
                    TNR = len(correct_indx_negative[correct_indx_negative == 0]) / len(correct_indx_negative)
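For reference, the same per-batch computation can be written framework-agnostically. This is not your exact code, just a plain-Python sketch (the function name and signature are my own) that returns None instead of crashing when a batch has no positives or no negatives:

```python
def rates(preds, targets, threshold=0.5):
    """Compute TPR (sensitivity) and TNR (specificity) from predicted
    probabilities and 0/1 targets. Returns None for a rate whose
    denominator is empty (e.g. a batch with no positive examples)."""
    hard = [1 if p > threshold else 0 for p in preds]
    tp = sum(1 for h, t in zip(hard, targets) if h == 1 and t == 1)
    tn = sum(1 for h, t in zip(hard, targets) if h == 0 and t == 0)
    n_pos = sum(targets)
    n_neg = len(targets) - n_pos
    tpr = tp / n_pos if n_pos else None
    tnr = tn / n_neg if n_neg else None
    return tpr, tnr

# One of two positives and one of two negatives classified correctly:
tpr, tnr = rates([0.9, 0.4, 0.2, 0.8], [1, 1, 0, 0])
```

Note that averaging per-batch rates weights small batches the same as large ones; accumulating TP/TN/FP/FN counts over the whole validation set and dividing once at the end is usually more faithful.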

What I want to know is: what would I have to do to decrease the loss value further? Is it just a matter of trial and error with hyperparameters?
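To a large extent it is trial and error, but the trial and error can be systematized, e.g. with a simple grid search selecting on validation loss. A hedged sketch, where `train_and_eval` is a hypothetical stand-in for your actual training loop (here it is a fake deterministic scoring function purely for illustration):

```python
import itertools

def train_and_eval(lr, hidden_size, dropout):
    """Stand-in for a real training run: would train the model with these
    hyperparameters and return the best validation loss reached.
    This fake version just scores distance from an arbitrary 'good' config."""
    return abs(lr - 1e-3) * 100 + abs(hidden_size - 128) / 256 + dropout

grid = {
    "lr": [1e-2, 1e-3, 1e-4],
    "hidden_size": [64, 128, 256],
    "dropout": [0.1, 0.3],
}

# Try every combination and keep the one with the lowest validation loss.
best = min(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda cfg: train_and_eval(**cfg),
)
```

One caveat: a lower loss is not the goal in itself. If validation loss plateaus while TPR/TNR stay high and stable, the model may already be about as good as the data allows; chasing a lower training loss from there mostly risks overfitting.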

Any comments will be really helpful!