How large should nn.Embedding be?

Hi all,

I have a pretrained embedding matrix of shape (172803, 300) and used self.embedding.from_pretrained(emb_vecs) to copy it into the layer.
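For context, the relevant part of my module looks roughly like this (a simplified sketch; the class name and argument names are just illustrative):

import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    def __init__(self, emb_vecs):
        # emb_vecs: FloatTensor of shape (172803, 300), i.e. (vocab_size, emb_dim)
        super(QuestionEncoder, self).__init__()
        # from_pretrained builds an Embedding whose num_embeddings / embedding_dim
        # are taken directly from the weight matrix
        self.embedding = nn.Embedding.from_pretrained(emb_vecs)

    def forward(self, q):
        # q: LongTensor of token indices, e.g. shape (32, 9)
        return self.embedding(q)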

However, when I try to look up embeddings for a tensor of the following shape

print(q.size()) # output is (32, 9)
q_embedding = self.embedding(q)

I get the following error:

RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorMath.cpp:352

This seems strange because the official documentation for Embedding says the input tensor can have an arbitrary shape.
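(For what it's worth, the shape itself doesn't seem to be the issue: a quick check with a 3-D index tensor works fine as long as the values stay below num_embeddings; sketch below, values chosen by me.)

import torch

emb = torch.nn.Embedding(10, 3)
idx = torch.LongTensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])  # shape (2, 2, 2), all indices < 10
print(emb(idx).size())  # torch.Size([2, 2, 2, 3]) -- output just gains a trailing dim of 3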

I also ran the following simple experiment:

Python 2.7.15rc1 (default, Apr 15 2018, 21:51:34) 
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> embedding = torch.nn.Embedding(10, 3)
>>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
>>> embedding(input)
tensor([[[-0.9792, -1.5882, -0.0207],
         [ 1.9464, -0.6515, -1.1061],
         [ 1.2522, -0.2758,  0.3255],
         [ 0.2748, -1.6323,  0.0761]],

        [[ 1.2522, -0.2758,  0.3255],
         [ 1.3587, -0.9372,  0.9779],
         [ 1.9464, -0.6515, -1.1061],
         [-0.3707, -0.4403, -0.4675]]], grad_fn=<EmbeddingBackward>)
>>> input1 = torch.LongTensor([[1,2,4,5],[4,3,2,9],[13,14,15,16]])
>>> embedding(input1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/fma/tensorflow/pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/fma/tensorflow/pytorch/local/lib/python2.7/site-packages/torch/nn/modules/sparse.py", line 110, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/home/fma/tensorflow/pytorch/local/lib/python2.7/site-packages/torch/nn/functional.py", line 1110, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorMath.cpp:352
>>> embedding = torch.nn.Embedding(17, 3) # Must be at least 17; a smaller value causes the error
>>> embedding(input1)
tensor([[[-0.1267,  0.5442, -1.5968],
         [-0.2980, -0.8039,  0.7393],
         [ 0.8526, -1.3021, -0.9185],
         [-0.8957,  1.2497, -1.1549]],

        [[ 0.8526, -1.3021, -0.9185],
         [ 0.4550, -0.5091, -0.5557],
         [-0.2980, -0.8039,  0.7393],
         [-0.2894, -0.2622, -0.9497]],

        [[-0.2754, -0.8513, -0.7684],
         [-0.9200, -1.2583,  2.5170],
         [ 0.7666,  0.4166,  0.8420],
         [-0.3305,  0.9930,  0.1318]]], grad_fn=<EmbeddingBackward>)

This looks confusing. What exactly is the rule for sizing nn.Embedding?

Thanks

Never mind, I forgot to delete an old vocab_size variable, which was much smaller than the actual vocabulary size. After deleting it, the error is gone.
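For anyone who hits the same error: every index fed to nn.Embedding has to be strictly smaller than num_embeddings, so a quick sanity check like this (just a sketch; q and embedding are my own variable names) would have caught the stale vocab_size immediately:

# q: LongTensor of token indices, embedding: the nn.Embedding layer
max_idx = int(q.max())
assert max_idx < embedding.num_embeddings, (
    "index %d out of range for Embedding with num_embeddings=%d"
    % (max_idx, embedding.num_embeddings))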