Problem using a pre-trained embedding inside an LSTM module

Hi,
I am trying to use a pre-trained word embedding inside an LSTM model. At first, I added the embedding layer inside the LSTM module, like this:

# Embedding layer sized from the pre-trained matrix
self.embedding_entity = nn.Embedding(embedding_entity_matrix.size(0), embedding_entity_matrix.size(1))
# Load the pre-trained weights into the layer
self.embedding_entity.weight = nn.Parameter(embedding_entity_matrix)
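For context, the surrounding module looks roughly like this (a simplified sketch; the class name EntityLSTM, the hidden size, and the forward signature are placeholders, not my exact code):

import torch
import torch.nn as nn

class EntityLSTM(nn.Module):
    def __init__(self, embedding_entity_matrix, hidden_size):
        super(EntityLSTM, self).__init__()
        # Embedding layer sized from the pre-trained matrix
        self.embedding_entity = nn.Embedding(embedding_entity_matrix.size(0),
                                             embedding_entity_matrix.size(1))
        # Load the pre-trained weights into the layer
        self.embedding_entity.weight = nn.Parameter(embedding_entity_matrix)
        self.lstm = nn.LSTM(embedding_entity_matrix.size(1), hidden_size,
                            batch_first=True)

    def forward(self, indices):
        # indices: LongTensor of word ids, shape (batch, seq_len)
        embedded = self.embedding_entity(indices)   # (batch, seq_len, embed_dim)
        output, (h_n, c_n) = self.lstm(embedded)
        return output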

The input is originally a word sequence; it is converted to index values through a word_index dictionary and then wrapped in a Variable.
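Concretely, the conversion looks roughly like this (the sentence and the dictionary here are toy examples, not my real data):

import torch
from torch.autograd import Variable

word_index = {'the': 0, 'cat': 1, 'sat': 2}        # toy word-to-id dictionary
sentence = ['the', 'cat', 'sat']
indices = [word_index[w] for w in sentence]         # word sequence -> ids
input_var = Variable(torch.LongTensor([indices]))   # shape (1, seq_len)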

When the module is called with this input, it does not return the embedding result but raises an error:

File "/home/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 215, in __call__
    var = var[0]
TypeError: 'int' object has no attribute '__getitem__'

Therefore, I moved the embedding layer outside of the LSTM module. Given the same input, the results are returned correctly.
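In the working version, the lookup happens before the LSTM is called, roughly like this (the random matrix and the layer sizes are stand-ins for my real values):

import torch
import torch.nn as nn
from torch.autograd import Variable

embedding_entity_matrix = torch.randn(10, 5)         # stand-in for the real matrix
embedding_entity = nn.Embedding(embedding_entity_matrix.size(0),
                                embedding_entity_matrix.size(1))
embedding_entity.weight = nn.Parameter(embedding_entity_matrix)
lstm = nn.LSTM(5, 8, batch_first=True)               # LSTM without the embedding inside

input_var = Variable(torch.LongTensor([[0, 1, 2]]))  # indexed word sequence
embedded = embedding_entity(input_var)               # lookup done outside the module
output, _ = lstm(embedded)                           # this version runs correctly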

Could anyone help explain why the pre-trained embedding fails inside the LSTM module but works outside of it? Many thanks.