Getting different output in the NLP Part-of-Speech Tagging example

Hello!

I am following the NLP tutorials on PyTorch's tutorials website. The output I get is different from what the tutorial shows, so I copy-pasted the whole code exactly as it is, and the output is still different.

My code is shared in this gist:

Example: An LSTM for Part-of-Speech Tagging

For the first sentence

['The', 'dog', 'ate', 'the', 'apple']
['DET', 'NN', 'V', 'DET', 'NN']

the output I get is:

tensor([[-0.7662, -0.6405, -4.8002],
        [-2.7163, -0.0698, -6.6515],
        [-3.1324, -5.7668, -0.0479],
        [-0.0528, -3.3832, -4.0481],
        [-2.4527, -0.0931, -5.8702]])

Taking the argmax of each row, I am getting the sequence 1 1 2 0 1 rather than the expected 0 1 2 0 1.
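
For reference, this is a minimal sketch of how I am reading the tag sequence off that tensor, assuming the tutorial's tag_to_ix = {"DET": 0, "NN": 1, "V": 2} mapping:

import torch

# Log-probabilities from the output above: one row per word, one column per tag.
tag_scores = torch.tensor([[-0.7662, -0.6405, -4.8002],
                           [-2.7163, -0.0698, -6.6515],
                           [-3.1324, -5.7668, -0.0479],
                           [-0.0528, -3.3832, -4.0481],
                           [-2.4527, -0.0931, -5.8702]])

# Tag indices as defined in the tutorial (my assumption of the mapping).
tag_to_ix = {"DET": 0, "NN": 1, "V": 2}
ix_to_tag = {ix: tag for tag, ix in tag_to_ix.items()}

# The predicted tag for each word is the column with the highest log-probability.
predicted = torch.argmax(tag_scores, dim=1)
print(predicted.tolist())                           # [1, 1, 2, 0, 1]
print([ix_to_tag[i] for i in predicted.tolist()])   # ['NN', 'NN', 'V', 'DET', 'NN']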

Can anyone please check this and point out why I am getting a different output?