Word2vec vectors differ from ones in nn.Embedding

I am training an lstm using pertained word2vec vectors in Pytorch. After reading up on it, I decided the best way to proceed was to feed the pertained vectors to nn.Embedding. In order to keep track of which vector corresponds to which word, I first created a dictionary of word:index in word2vec object’s vocabulary, and then use this index to refer to the word in nn.Embedding. Upon debugging my code, however, I found that the vectors in nn.Embeddings are different to the ones in the word2vec model. I can’t figure out if this is due to a change in the format or whether I’ve made a mistake.

My word:index dictionary :

self.word_index_dictionary = {word: index for index, word in enumerate(self.w2v_model.wv.vocab)}

method for loading embeddings:

def load_embeddings(self):
    print("printing self.word_index_dictionary")
    print(self.word_index_dictionary.items())
    embeddings=np.zeros((len(self.word_index_dictionary),self.cfg.lstm.embedding_dim))
    for word,index in self.word_index_dictionary.items():
        vector = np.array(self.w2v_model[word], dtype='float64')
        embeddings[index]=vector
        return embeddings

feeding the embeddings to nn.Embedding:

    word2vec_embeddings_in_same_order_as_w2v_vocab=self.load_embeddings()
    self.embedding=nn.Embedding(len(self.word_index_dictionary),self.cfg.lstm.embedding_dim)
    self.embedding.weight.data.copy_(torch.from_numpy(word2vec_embeddings_in_same_order_as_w2v_vocab))

I try to inspect the vector for a word in both the word2vec object and the one in Embedding dim, expecting them to be the same:

    output_word_index = self.word_index_dictionary[output_word]
    x = output_word_index.to(self.cfg.lstm.device)
    x = self.embedding(x).view(self.cfg.lstm.batch_size, -1, self.cfg.lstm.embedding_dim)

in word2vec model:

self.w2v_model[output_word]

The two vectors have the same dimensions, but the values are different:

From nn.Embedding:

tensor([[[ 0.0620, -0.0711, 0.1139, -0.0936, 0.1118, 0.1150, -0.1091, -0.0958, 0.0875, -0.0959, -0.0397, -0.0891, 0.0685, 0.1204, 0.1070, 0.1117, 0.0749, -0.1213, 0.0350, 0.1238, 0.0788, -0.0951, 0.0550, 0.0341, -0.0847, -0.1133, -0.0573, -0.1167, -0.0510, 0.0122, -0.0675, -0.0099, -0.0481, 0.0742, -0.1072, 0.1219, 0.1177, 0.1120, 0.1071, -0.0270, -0.0722, 0.1173, -0.1178, -0.1295, 0.1192, -0.0634, 0.0901, -0.0853, -0.0990, -0.0598, -0.0068, 0.0056, -0.0435, -0.0031, 0.0750, -0.0634, 0.0778, -0.0417, 0.0353, -0.0390, -0.0537, 0.1324, -0.0771, 0.0400, -0.1188, 0.0762, 0.0881, -0.1260, -0.0875, -0.1367, 0.0242, -0.1204, 0.0496, 0.1155, -0.0094]]])

From the word2vec model:

[-0.00605829 0.00661848 -0.00011414 -0.00506099 0.00113591 -0.00095635 -0.00200359 0.00257369 -0.00312977 0.00681804 -0.00061551 0.00023257 0.00546775 -0.00027482 -0.00469235 0.00086751 -0.0044327 0.00429228 -0.00217963 0.00783066 -0.00583101 0.00266223 0.00201999 0.00144446 -0.0055535 -0.00289808 -0.00613204 -0.00612309 0.00544106 -0.00662005 -0.00191573 -0.00677378 -0.00342982 -0.00493141 0.00669031 0.00063995 0.00286465 0.00315713 0.00311586 0.00500501 -0.00551272 0.00138103 0.0026006 -0.00531218 0.00264457 0.00443299 -0.00166974 0.00827436 0.00490502 -0.00098338 -0.00038242 -0.00307148 0.00623394 -0.00268476 0.00056968 -0.00608332 0.00559199 0.00572456 -0.00977404 0.00621489 0.00333743 -0.00340542 -0.00151842 -0.00240097 -0.00404807 -0.00048501 0.0043892 0.00011252 0.00044196 -0.00463731 -0.0031015 0.00352162 -0.0040762 0.00301864 -0.00336199]

Is this a tensor vs Numpy format issue, or have I done something wrong?