I want to convert the following CBOW model into a skip-gram model

Hello guys, I hope you are doing well. I have implemented the following continuous bag-of-words (CBOW) model in PyTorch. Now I want to tweak this model to turn it into a skip-gram model, but I am having difficulty figuring out how to do that. Here is the model code and its training sequence.

import torch
import torch.nn as nn

CONTEXT_SIZE = 2 # 2 words to the left, 2 to the right
EMDEDDING_DIM = 500

raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()

# By deriving a set from raw_text, we deduplicate the array

vocab = set(raw_text)
vocab_size = len(vocab)

word_to_ix = {word:ix for ix, word in enumerate(vocab)}
ix_to_word = {ix:word for ix, word in enumerate(vocab)}

data = []
for i in range(2, len(raw_text) - 2):
    context = [raw_text[i - 2], raw_text[i - 1],
               raw_text[i + 1], raw_text[i + 2]]
    target = raw_text[i]
    data.append((context, target))

def make_context_vector(context, word_to_ix):
    idxs = [word_to_ix[w] for w in context]
    return torch.tensor(idxs, dtype=torch.long)
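
For reference, the first pair the loop above produces should be (['We', 'are', 'to', 'study'], 'about'), and make_context_vector just turns the four context words into a LongTensor of their indices:

example_context, example_target = data[0]
print(example_context, example_target)                   # ['We', 'are', 'to', 'study'] about
print(make_context_vector(example_context, word_to_ix))  # tensor of 4 word indices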

class CBOW(torch.nn.Module):
    def __init__(self, vocab_size, embedding_dim):
        super().__init__()

        # out: 1 x embedding_dim
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(embedding_dim, 128)
        self.activation_function1 = nn.ReLU()

        self.linear2 = nn.Linear(128, 100)
        # out: 1 x vocab_size
        self.linear3 = nn.Linear(100, vocab_size)
        self.activation_function2 = nn.LogSoftmax(dim=-1)

    def forward(self, inputs):
        embeds = sum(self.embeddings(inputs)).view(1, -1)
        out = self.linear1(embeds)
        out = self.activation_function1(out)
        out = self.linear2(out)
        out = self.linear3(out)
        out = self.activation_function2(out)
        return out

    def get_word_emdedding(self, word):
        word = torch.tensor([word_to_ix[word]])
        return self.embeddings(word).view(1, -1)
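
Just to spell out my own understanding of the forward pass, for a single training pair I expect the shapes to be:

# Shapes I expect in forward() for one (context, target) pair (my understanding):
#   inputs                          -> LongTensor with 4 context word indices
#   self.embeddings(inputs)         -> 4 x EMDEDDING_DIM
#   sum(...).view(1, -1)            -> 1 x EMDEDDING_DIM (the four context embeddings summed)
#   output after linear3 + softmax  -> 1 x vocab_size log-probabilities over the target word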

model = CBOW(vocab_size, EMDEDDING_DIM)

loss_function = nn.NLLLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

#TRAINING
for epoch in range(50):
    total_loss = 0

    for context, target in data:
        context_vector = make_context_vector(context, word_to_ix)

        log_probs = model(context_vector)

        total_loss += loss_function(log_probs, torch.tensor([word_to_ix[target]]))

    #optimize at the end of each epoch
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()

#TESTING
context = ['the', 'spirits', 'the', 'computer']
context_vector = make_context_vector(context, word_to_ix)
a = model(context_vector)
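
To inspect the result I look up the highest-scoring word like this (not sure it is the cleanest way):

# a is 1 x vocab_size log-probabilities, so the predicted centre word is:
print(ix_to_word[torch.argmax(a[0]).item()])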

Can someone help me figure out what I should tweak, and where, in this code in order to convert it into a simple skip-gram model? I have seen plenty of models on GitHub, but they include negative sampling. However, I just need a simple skip-gram model. Any help would be appreciated. Thanks!!!
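
P.S. To make the question a bit more concrete, this is the rough direction I was thinking of. It is only a sketch of my own understanding (the names SkipGram and skipgram_data are mine), so it may well be wrong: instead of summing the context embeddings to predict the center word, I believe skip-gram should embed the center word and predict each surrounding word, so every (context, target) pair gets flipped into several (center, context_word) pairs.

# Sketch only -- my attempt, not a solution I am confident in.

# Flip each CBOW (context, target) pair into (center, context_word) pairs
skipgram_data = []
for context, target in data:
    for ctx_word in context:
        skipgram_data.append((target, ctx_word))

class SkipGram(nn.Module):
    def __init__(self, vocab_size, embedding_dim):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(embedding_dim, 128)
        self.activation_function1 = nn.ReLU()
        self.linear2 = nn.Linear(128, vocab_size)
        self.activation_function2 = nn.LogSoftmax(dim=-1)

    def forward(self, center_ix):
        # center_ix: LongTensor with a single word index
        embeds = self.embeddings(center_ix).view(1, -1)  # 1 x embedding_dim
        out = self.activation_function1(self.linear1(embeds))
        out = self.linear2(out)
        return self.activation_function2(out)            # 1 x vocab_size log-probabilities

sg_model = SkipGram(vocab_size, EMDEDDING_DIM)
sg_loss_function = nn.NLLLoss()
sg_optimizer = torch.optim.SGD(sg_model.parameters(), lr=0.001)

for epoch in range(50):
    total_loss = 0
    for center, ctx_word in skipgram_data:
        center_ix = torch.tensor([word_to_ix[center]], dtype=torch.long)
        log_probs = sg_model(center_ix)
        total_loss += sg_loss_function(log_probs, torch.tensor([word_to_ix[ctx_word]]))
    sg_optimizer.zero_grad()
    total_loss.backward()
    sg_optimizer.step()

Is this roughly the right idea, or am I missing something about how skip-gram is supposed to be trained when negative sampling is left out?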