FastRNNs Benchmark - custom_lstms reverse

I’m trying to write a custom LSTM with TorchScript. Naturally, my first impulse was to take the fastRNNs benchmark’s custom_lstms file and run it. However, it fails on both torch 1.4 and 1.5:

```
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "C:/Users/anarc/git/Elmo-Training-PyTorch/elmo/modules/", line 298, in forward
    for rnn_layer in self.layers:
        state = states[i]
        output, out_state = rnn_layer(output, state)
                            ~~~~~~~~~ <--- HERE
        output_states += [out_state]
        i += 1
  File "C:/Users/anarc/git/Elmo-Training-PyTorch/elmo/modules/", line 238, in forward
    for direction in self.directions:
        state = states[i]
        out, out_state = direction(input, state)
                         ~~~~~~~~~ <--- HERE
        outputs += [out]
        output_states += [out_state]
  File "C:/Users/anarc/git/Elmo-Training-PyTorch/elmo/modules/", line 210, in forward
    def forward(self, input, state):
        # type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
        inputs = reverse(input.unbind(0))
                 ~~~~~~~ <--- HERE
        outputs = jit.annotate(List[Tensor], [])
        for i in range(len(inputs)):
  File "C:/Users/anarc/git/Elmo-Training-PyTorch/elmo/modules/", line 90, in reverse
    def reverse(lst):
        # type: (List[Tensor]) -> List[Tensor]
        return lst[::-1]
               ~~~~~~~~ <--- HERE
RuntimeError: invalid vector subscript
```

To my understanding, negative strides are neither supported nor on the roadmap (according to this). So the failure itself doesn’t surprise me, but then why is that reverse function in the benchmark at all?

Am I missing something, is there a trick to reversing a Tensor like that?

Obviously, that would make creating a backward layer easier (and probably faster than torch.flip, which apparently allocates new memory).
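The closest workaround I’ve come up with is replacing the negative-stride slice with an explicit index loop, which I’d expect TorchScript to accept since it only uses plain indexing. A sketch (the body is ordinary Python, so it also runs outside TorchScript; I haven’t benchmarked it against torch.flip):

```python
def reverse(lst):
    # type: (List[Tensor]) -> List[Tensor]
    # Build the reversed copy with an explicit index loop instead of
    # lst[::-1]; negative-stride slicing is rejected by TorchScript,
    # but plain forward indexing is supported.
    out = []
    for i in range(len(lst)):
        out.append(lst[len(lst) - 1 - i])
    return out
```

Since the loop only reorders references to the unbound tensor views, it shouldn’t copy any tensor data, unlike torch.flip.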