It shows me an error (attached image). Q1: Why is this embedding not accepting tensors as parameters? Q2: What is the difference between tf.nn.embedding_lookup() in TensorFlow and nn.Embedding() in PyTorch? Q3: Does PyTorch support any function similar to np.asarray (code line 103)? For now I want to use torch.from_numpy and proceed only in PyTorch.
A1: You are creating an embedding layer where the first two arguments (self.theta, pos_r) should be integers: first the number of embeddings, then the embedding dimension.
Here’s an example of how to use it:
# in __init__
num_embs = 3
emb_dim = 10
emb = nn.Embedding(num_embs, emb_dim)
...
# later in forward
x = torch.LongTensor([2, 1, 0])
out = emb(x)  # looks up rows 2, 1, 0 of the table; shape (3, emb_dim)
A2: From what I can see of tf.nn.embedding_lookup, they are basically the same thing.
The difference is that in PyTorch the lookup table is initialized automatically (based on the arguments, a num_embeddings x embedding_dim weight matrix is initialized from a standard normal distribution), while in TF you have to pass an already-initialized table (the params argument).
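To make the comparison concrete, here is a small sketch. If you already have a pre-built table (as you would pass to tf.nn.embedding_lookup), PyTorch's nn.Embedding.from_pretrained lets you wrap it instead of letting the layer initialize its own weights; the table values here are just random placeholders:

```python
import torch
import torch.nn as nn

# PyTorch's default: the layer initializes its own (3 x 10) weight table
emb = nn.Embedding(3, 10)

# TF-style: start from an already-initialized table you built yourself
table = torch.randn(3, 10)                  # placeholder pre-built table
emb_fixed = nn.Embedding.from_pretrained(table)

ids = torch.LongTensor([2, 1, 0])
out = emb_fixed(ids)                        # analogous to tf.nn.embedding_lookup(table, ids)
```

Here `out[i]` is simply row `ids[i]` of the table, which is exactly what the TF lookup does.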
A3: If I understand correctly, you want to create a torch.Tensor from a list, similar to what is done on line 103.
You can create the tensor by passing your list to the constructor:
values = [1, 2, 3]  # avoid naming it "list", which shadows the built-in
tensor = torch.FloatTensor(values)
# or
tensor = torch.LongTensor(values)
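Since you mentioned np.asarray and torch.from_numpy specifically, a short sketch of both routes (the sample values are placeholders, not your actual line-103 data):

```python
import numpy as np
import torch

arr = np.asarray([1, 2, 3])      # what np.asarray gives you
t_from_np = torch.from_numpy(arr)  # shares memory with arr, no copy

t_direct = torch.tensor([1, 2, 3])  # skips NumPy entirely
```

torch.from_numpy is zero-copy, so mutating the array mutates the tensor; use torch.tensor if you want an independent copy and to stay purely in PyTorch.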
Thanks, mate. In the attached link, kindly see lines 58, 59, and 103, which are used by lines 112 and 113. He passed tensors as parameters. When I passed the tensors, it showed me an error: got (Tensor, Tensor)…?