Model weights not getting updated

I am a rookie in PyTorch. I'm trying to build a simple NMT model with attention on top of a bidirectional LSTM.

    def forward(self, x, h, c, s, loss_fn, y_true):
        loss_val = 0
        self.batch_size = x.shape[0]
        activations = self.encoder(x, h, c)          # encoder outputs for all source steps
        for tx in range(self.max_tx):                # one decoder step per target position
            alphas = self.attention(activations, s)  # attention weights given decoder state
            shape = activations.shape
            activations_dot = activations.reshape(shape[1], shape[0], shape[2])
            context = torch.tensordot(alphas, activations_dot, dims=2).unsqueeze(dim=1)
            output, s = self.rnn(context, s)
            s = self.tanh(s)
            y = self.dec_fc2(s)
            y = self.dec_softmax(y)
            loss_val = loss_val + loss_fn(y.squeeze(), y_true[:, tx])
        return loss_val

Here’s my train block

for epoch in range(epochs):
    loss_fn = 0
    h0, c0 = model.init_hidden()
    s = model.init_hidden(1)
    for xt, yt in train_loader:
        loss_fn = model.forward(xt, h0, c0, s, loss, yt)
        optimizer.zero_grad()
        loss_fn.backward()
        optimizer.step()
    print(loss_fn)

loss is NLLLoss.
The loss value is the same, or nearly the same (differences on the order of 1e-6), at every epoch, and when I print the model parameters before and after backward() they are identical. I need help figuring out why the weights aren't updating.
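For reference, a simplified, self-contained version of the check I ran, with a toy linear model standing in for my actual NMT model (the names here are just placeholders, not my real code):

```python
# Toy sketch of the "parameters before vs. after a step" check.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(3, 2)                  # stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

before = model.weight.detach().clone()   # snapshot weights before the step
loss = model(torch.randn(4, 3)).pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
after = model.weight.detach().clone()    # snapshot weights after the step

# In my real model these come out identical; in this toy case they differ.
print(torch.equal(before, after))
```

In this toy case the printed result is False, since the update goes through; in my model the parameters stay identical.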
Thanks in advance.

Without seeing the full model and how everything is wired up, it's difficult to guess what's going on here. Maybe you want to have a look at the working notebook implementing NMT using an RNN-based encoder-decoder architecture.
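One thing worth double-checking, since I can't see how `dec_softmax` is defined: `NLLLoss` expects log-probabilities, so the decoder head should end in `LogSoftmax`, not plain `Softmax`. A minimal sketch (the layer here is a stand-in for your `dec_fc2` / `dec_softmax`, not your actual code):

```python
# NLLLoss expects log-probabilities, so pair it with LogSoftmax.
import torch
import torch.nn as nn

torch.manual_seed(0)
fc = nn.Linear(4, 3)                 # stand-in for a decoder output layer
x = torch.randn(2, 4)
target = torch.tensor([0, 2])

logits = fc(x)
log_probs = nn.LogSoftmax(dim=1)(logits)  # correct pairing with NLLLoss
loss = nn.NLLLoss()(log_probs, target)
loss.backward()

# Gradients reach the layer's weights, so an optimizer step would move them.
assert fc.weight.grad is not None
assert fc.weight.grad.abs().sum().item() > 0
```

Equivalently, you can keep raw logits and use `nn.CrossEntropyLoss`, which applies the log-softmax internally. If that's not the issue, also check that the hidden states passed into `forward` are created in a way that keeps the graph per batch rather than growing across batches.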