Apply an LSTM to the inner dimensions

Normally, we have data like

inputs = [
  ['how', 'are', 'you', '?'],
  ['I', 'am', 'good', '.'],
  ...
]

The shape is (batch_size, seq_len), so after embedding the tokens we can apply an LSTM directly:

self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
output, (h_n, c_n) = self.lstm(embedded)  # embedded: (batch_size, seq_len, embed_dim)

However, I have data like

inputs = [
  [
    ['U.S.', 'president'], ['is', 'not'], ['clinton', 'hillary']
  ],
  [
    ['The', 'boss'], ['likes'], ['apple', 'pies'],
  ],
  ...
]

I'd like to apply the LSTM to the innermost dimension, i.e. to ['U.S.', 'president'], ['is', 'not'], and so on.
Then I'll apply attention over [output of LSTM(['U.S.', 'president']), output of LSTM(['is', 'not']), output of LSTM(['clinton', 'hillary'])].

My question is: how do I apply the LSTM to the inner dimensions?

Please help!

Perhaps using transpose and view to change the dimensions?
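Here is a minimal sketch of that idea: merge the batch and phrase dimensions with view so each inner phrase becomes its own sequence, run the LSTM once over the flattened batch, then restore the phrase dimension. The sizes, the attention weights, and all variable names are my own placeholders, not from your code; the attention at the end is just a toy dot-product with a random query to show the shapes.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration.
batch, num_phrases, phrase_len = 2, 3, 2
embed_dim, hidden_dim = 8, 16

# Already-embedded inputs: (batch, num_phrases, phrase_len, embed_dim)
x = torch.randn(batch, num_phrases, phrase_len, embed_dim)

lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

# Merge the batch and phrase dims so each phrase is one sequence:
# (batch * num_phrases, phrase_len, embed_dim)
flat = x.view(batch * num_phrases, phrase_len, embed_dim)
out, (h_n, _) = lstm(flat)  # h_n: (1, batch * num_phrases, hidden_dim)

# Use the last hidden state as each phrase's representation and
# restore the phrase dimension: (batch, num_phrases, hidden_dim)
phrases = h_n[-1].view(batch, num_phrases, hidden_dim)

# Toy attention over the phrase representations (random query vector,
# just to demonstrate the shapes):
query = torch.randn(hidden_dim, 1)
scores = torch.softmax(phrases @ query, dim=1)  # (batch, num_phrases, 1)
context = (scores * phrases).sum(dim=1)         # (batch, hidden_dim)
```

No transpose is needed here because the dimensions being merged are already adjacent; view (or reshape, if the tensor may be non-contiguous) is enough.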


Thank you!
I’ll try it!