I am using the nn.Embedding class to generate word embeddings, and I have tried two settings in my code:
1.
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(emb_idim, emb_odim)

    def forward(self, indexs):
        wemb = self.embedding(indexs)  # direct row lookup
        return wemb
2.

    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(emb_idim, emb_odim)

    def forward(self, indexs):
        # one-hot matmul; max_num_of_indexes must equal emb_idim for the shapes to match
        query_one_hot = F.one_hot(indexs, max_num_of_indexes)
        wemb = query_one_hot.float() @ self.embedding.weight
        return wemb
The wemb results from the two settings are equal after one forward pass, but I get different testing performance after one epoch of training.
Could anyone tell me what's wrong with my code?
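
In case it is useful, here is a minimal standalone sketch of the comparison I have in mind (the dimensions, indices, and seed below are made up for illustration). Since each row of the one-hot matrix is all zeros except for a single one, the matmul should select exactly one row of the weight matrix, so I would expect both the forward outputs and the gradients reaching the embedding weight to match:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    emb_idim, emb_odim = 10, 4             # toy sizes, for illustration only
    emb = nn.Embedding(emb_idim, emb_odim)
    indexs = torch.tensor([1, 3, 7])

    # Setting 1: direct lookup
    wemb1 = emb(indexs)

    # Setting 2: one-hot matmul against the same weight matrix
    wemb2 = F.one_hot(indexs, emb_idim).float() @ emb.weight

    print(torch.allclose(wemb1, wemb2))    # expect True: outputs match

    # Compare the gradients each formulation sends to emb.weight
    wemb1.sum().backward()
    g1 = emb.weight.grad.clone()
    emb.weight.grad = None
    wemb2.sum().backward()
    g2 = emb.weight.grad
    print(torch.allclose(g1, g2))          # expect True: gradients match

Both checks print True for me, which is why the different test performance after an epoch confuses me.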