RuntimeError raised by loss.backward()

While training an RNN-based model, a RuntimeError is raised in loss.backward():

"


File “/usr/local/lib/python2.7/dist-packages/torch/tensor.py”, line 102, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File “/usr/local/lib/python2.7/dist-packages/torch/autograd/init.py”, line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: select(): index 10 out of range for tensor of size [10, 8, 12020] at dimension 0
"

I wonder why such an error appears at loss.backward().
I'm convinced the data flow is correct, because the forward() pass runs without problems.

How can I debug such an error?


To whoever experiences this error:

In my case it was caused by incorrect arguments to rnn.pack_padded_sequence; specifically, batch_first wasn't set to True as my code expected.
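For illustration, here is a minimal sketch of the mismatch I mean. The shapes, the GRU module, and the length values are made up for the example and are not from the original poster's model; the point is only that if the input tensor is laid out as (batch, seq, feature), then batch_first=True has to be passed to pack_padded_sequence as well, otherwise the batch and time axes get swapped and the error can surface later, e.g. during backward().

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Hypothetical shapes for illustration only: 8 sequences, max length 10, 16 features.
batch_size, max_len, feat = 8, 10, 16
x = torch.randn(batch_size, max_len, feat)   # laid out as (batch, seq, feature)
lengths = [10, 9, 8, 7, 6, 5, 4, 3]          # valid length of each sequence, sorted descending

rnn = nn.GRU(input_size=feat, hidden_size=32, batch_first=True)

# Because x is (batch, seq, feature), batch_first=True must be passed here too.
# Leaving it at the default (False) makes pack_padded_sequence interpret x as
# (seq, batch, feature), silently mixing up the axes.
packed = pack_padded_sequence(x, lengths, batch_first=True)
out, h = rnn(packed)
out, out_lengths = pad_packed_sequence(out, batch_first=True)
```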