Dimension error in seq2seq Decoder model

Hi All,

I am a complete beginner in the NLP domain, and I am trying to implement a seq2seq model.

Below is the code for the LSTM Decoder:

import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, output_dim, emb_dim, hid_dim, n_layers):
        super().__init__()

        self.emb_dim = emb_dim
        self.n_layers = n_layers
        self.hid_dim = hid_dim

        self.embed = nn.Embedding(output_dim, emb_dim)
        self.RNN = nn.LSTM(emb_dim, hid_dim, n_layers)
        self.fc = nn.Linear(hid_dim, output_dim)

    def forward(self, x, hidden, cell):
        # unsqueeze(0) adds a sequence-length dimension of size 1 at the front
        x = x.unsqueeze(0)
        embedding = self.embed(x)
        # embedding now has shape (1, N, emb_dim)
        # https://buomsoo-kim.github.io/attention/2020/01/25/Attention-mechanism-4.md/
        o, (ht, ct) = self.RNN(embedding, (hidden, cell))
        pred = self.fc(o.squeeze(0))
        return pred, ht, ct

I would like to understand what x = x.unsqueeze(0) is used for, and whether it should be used at all, since it seems to convert a 3-dim input into a 4-dim one while nn.LSTM only takes 3-dim input.
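To make the question concrete, here is a small shape check I put together with the Decoder above; the hyperparameter values, batch size, and the assumption that x holds one token index per example are just placeholders I chose for testing:

import torch

# Placeholder hyperparameters, chosen only for this shape check
output_dim, emb_dim, hid_dim, n_layers = 10, 8, 16, 2
batch_size = 4

decoder = Decoder(output_dim, emb_dim, hid_dim, n_layers)

# One target token per example in the batch -> shape (batch_size,)
x = torch.randint(0, output_dim, (batch_size,))
hidden = torch.zeros(n_layers, batch_size, hid_dim)
cell = torch.zeros(n_layers, batch_size, hid_dim)

print(x.shape)               # torch.Size([4])
print(x.unsqueeze(0).shape)  # torch.Size([1, 4]) -> (seq_len=1, batch)

pred, ht, ct = decoder(x, hidden, cell)
print(pred.shape)            # torch.Size([4, 10]) -> (batch, output_dim)
print(ht.shape)              # torch.Size([2, 4, 16]) -> (n_layers, batch, hid_dim)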

Sorry if this is a stupid question!