How to visualise the impact of input variables on the output

Hi all,

I developed an LSTM model, and I am trying to see how my inputs affect its predictions. I have tried to use saliency, in the way it is used for CNNs (a rough sketch of what I mean is below).
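Roughly, what I mean by saliency here is something like the following minimal sketch (PyTorch assumed; the model, shapes, and names are placeholders, not my real network):

```python
import torch
import torch.nn as nn

class SimpleLSTM(nn.Module):
    """Placeholder LSTM regressor: sequence in, single value out."""
    def __init__(self, n_features, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, timesteps, hidden)
        return self.fc(out[:, -1, :])  # predict from the last timestep

model = SimpleLSTM(n_features=4)
model.eval()

# One input sequence: (batch=1, timesteps=10, features=4)
x = torch.randn(1, 10, 4, requires_grad=True)

# Back-propagate the scalar output to the inputs
y = model(x)
y.backward()

# Saliency: absolute gradient of the output w.r.t. each input value,
# i.e. a (timesteps, features) map of local sensitivity
saliency = x.grad.abs().squeeze(0)
print(saliency.shape)  # torch.Size([10, 4])
```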

I am now quite confused: the saliency values come from back-propagating the gradient of the model's output to the inputs. In a CNN, for instance, this gradient can be rendered as a heat map showing which parts of the image the prediction depends on most. However, this is not what I want.

Or have I misunderstood saliency?