How to interpret layerwise relevance propagation for RNN classification?

I am deriving Layerwise Relevance Propagation for an RNN classification model.

Model configuration:

dnn(
  (encoder): Encoder(
    (embedding): Embedding(44, 32, padding_idx=0)
    (gru): GRU(32, 64)
  )
  (fc): ModuleList(
    (0): Linear(in_features=19584, out_features=128, bias=True)
    (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): Linear(in_features=128, out_features=128, bias=True)
    (3): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (4): Linear(in_features=128, out_features=2, bias=True)
  )
)

Max sequence length = 306

Dictionary size = 44

Dictionary:
{'_': 0, '?': 1, '\t': 2, '\n': 3, '#': 4, '(': 5, ')': 6, '+': 7, '-': 8, '1': 9, '12': 10, '2': 11, '21': 12, '23': 13, '24': 14, '3': 15, '32': 16, '34': 17, '35': 18, '4': 19, '43': 20, '5': 21, '6': 22, '64': 23, '7': 24, '73': 25, '=': 26, 'B': 27, 'Br': 28, 'C': 29, 'Cl': 30, 'F': 31, 'I': 32, 'N': 33, 'O': 34, 'P': 35, 'S': 36, '[': 37, ']': 38, 'c': 39, 'i': 40, 'n': 41, 'o': 42, 's': 43}
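
For context, I believe the forward pass looks roughly like the sketch below. How exactly the GRU outputs are flattened before the fc stack is my assumption, inferred from in_features=19584 = 306 * 64 (sequence length times GRU hidden size):

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size=44, emb_dim=32, hidden=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden)

    def forward(self, x):                      # x: (batch, 306) token ids
        e = self.embedding(x)                  # (batch, 306, 32)
        h, _ = self.gru(e.transpose(0, 1))     # (306, batch, 64), sequence-first GRU
        return h.transpose(0, 1)               # (batch, 306, 64)

class dnn(nn.Module):
    def __init__(self, max_len=306, hidden=64):
        super().__init__()
        self.encoder = Encoder()
        self.fc = nn.ModuleList([
            nn.Linear(max_len * hidden, 128),  # 306 * 64 = 19584
            nn.BatchNorm1d(128),
            nn.Linear(128, 128),
            nn.BatchNorm1d(128),
            nn.Linear(128, 2),
        ])

    def forward(self, x):
        h = self.encoder(x)
        z = h.reshape(x.size(0), -1)           # flatten to (batch, 19584) -- my assumption
        for layer in self.fc:
            z = layer(z)
        return z                               # (batch, 2) class scores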

For an original input of ‘c1ccccc1C(C(NC)C)O’, the tokenized input is as follows:

tensor([[ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
          0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  2, 39,
          9, 39, 39, 39, 39, 39,  9, 29,  5, 29,  5, 33, 29,  6, 29,  6, 34,  3]])
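
For reference, my tokenization and padding work roughly as in this minimal sketch (the real tokenizer also handles multi-character entries such as 'Cl' and 'Br'; the left-padding with padding index 0 ('_') is what produces the leading zeros above, and '\t' / '\n' act as start / end markers):

import torch

def tokenize(smiles, vocab, max_len=306):
    # '\t' and '\n' are start / end markers; left-pad with the padding index 0 ('_').
    tokens = ['\t'] + list(smiles) + ['\n']
    ids = [vocab[t] for t in tokens]
    return torch.tensor([[vocab['_']] * (max_len - len(ids)) + ids])

vocab = {'_': 0, '\t': 2, '\n': 3, '(': 5, ')': 6, '1': 9, 'C': 29,
         'N': 33, 'O': 34, 'c': 39}            # subset of the full dictionary
x = tokenize('c1ccccc1C(C(NC)C)O', vocab)
print(x.shape)                                 # torch.Size([1, 306])
print(x[0, -20:])                              # last 20 entries match the tensor above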

This input is predicted as “positive” by the above classification model.

My layerwise relevance propagation output is:

R[0].shape
torch.Size([44, 306])

Let's look at position 305 (index 304) in the relevance tensor:

R[0][:,304]
tensor([     0.0000,  -2519.1868,  -6721.7515,   1773.0427,   2274.4043,
      1315.9221,    689.7231,  -3970.1589,   -290.6332,   1902.7957,
     -3552.2190,   2137.6050,   7068.9321,    824.2842,  -2354.6023,
      -888.0222,    183.6073,  -1015.4807,  -1693.0706,   5424.1680,
     -1585.7656,    456.7188,   -424.3035,   -849.8544,  -3942.9111,
     -2586.9678,     38.4876,   2865.9019,  -4129.2144,  -6311.8076,
      3502.2493,   -495.0044,  -1296.6990,   5003.5410,   4107.5718,
      1664.5162, -10036.3223,   1197.8634,  -1420.5979,  -2937.0493,
     -5572.0791,   4293.1611,   5572.4443,  -3519.6023],
   grad_fn=<SelectBackward0>)
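
To get a single score per input character, I have been summing R[0] over its first dimension and reading off the last 20 positions, but I am not sure this collapsing is the right interpretation (both the sum and the indexing below are my own guesses):

per_position = R[0].sum(dim=0)                 # guess: collapse the 44-dim axis -> (306,)
scores = per_position[-20:]                    # non-padded positions: '\t', 18 characters, '\n'
for ch, s in zip('\t' + 'c1ccccc1C(C(NC)C)O' + '\n', scores.tolist()):
    print(repr(ch), round(s, 2))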

I want to find which of the 18 characters contributed most to classifying the given input as positive.
How can I interpret the R[0] tensor? For the R[0][:,304] shown above, how do I know which of these 44 relevance scores belongs to my 305th token (i.e. 34 in the tokenized input)? Any help will be deeply appreciated!