RuntimeError: Expected hidden[0] size (1, 64, 256), got (64, 256)

I am unable to solve this problem. I even tried printing all of the h_0 tensors, and the decoder has batch_first=True.
I have two encoders, and after concatenating their representations I get the required output.
My final encoder output shape after the concatenation is [64, 43, 256], i.e. [B x Seq_len x hidden_dim].
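From what I understand of the docs, even with batch_first=True the initial (h_0, c_0) is still expected as [num_layers * num_directions, B, hidden_dim], so for a single-layer unidirectional decoder that would be (1, 64, 256). A small standalone sketch of the shapes I mean (toy tensors and placeholder names, not my actual model):

import torch
import torch.nn as nn

B, seq_len, hidden_dim = 64, 43, 256

# toy decoder with the same settings I use: single layer, batch_first=True
decoder = nn.LSTM(input_size=hidden_dim, hidden_size=hidden_dim,
                  num_layers=1, batch_first=True)

trg_emb = torch.randn(B, seq_len, hidden_dim)     # input:  [B, seq_len, hidden_dim]
h_0 = torch.randn(1, B, hidden_dim)               # hidden: [num_layers, B, hidden_dim]
c_0 = torch.randn(1, B, hidden_dim)               # cell:   same shape as h_0

trg_h, (h_n, c_n) = decoder(trg_emb, (h_0, c_0))  # runs fine, trg_h is [64, 43, 256]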

Here is the code snippet:

def forward(self, input_src, input_trg, courteous_template, ctx_mask=None, trg_mask=None):
    """Propagate input through the network."""
    src_emb = self.src_embedding(input_src)
    trg_emb = self.trg_embedding(input_trg)
    temp_emb = self.temp_embedding(courteous_template)

    self.h0_encoder, self.c0_encoder = self.get_state(input_src)
    self.h1_encoder, self.c1_encoder = self.get_courteous(courteous_template)

    src_h, (src_h_t, src_c_t) = self.encoder(
        src_emb, (self.h0_encoder, self.c0_encoder)
    )
    

    temp_h, (tmp_h_t, tmp_c_t) = self.temp_encoder(
        temp_emb, (self.h1_encoder, self.c1_encoder)
    )

    out = torch.cat((src_h, temp_h), 1)

    out = out.reshape(out.size(1), out.size(0), out.size(2))

    # print(out.size())

    h_t = out[-1]
    h = h_t.view(h_t.size(0), h_t.size(1))

    trg_h, (_, _) = self.decoder(
        trg_emb, h_t.view(h_t.size(0), h_t.size(1))
    )

I got this error while training:
RuntimeError: Expected hidden[0] size (1, 64, 256), got (64, 256)
I just can't figure out what the problem is.
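For reference, a stripped-down sketch (toy tensors only, assuming a single-layer unidirectional decoder) that hits the same size mismatch when the hidden state is passed as a 2-D tensor:

import torch
import torch.nn as nn

B, seq_len, hidden_dim = 64, 43, 256

decoder = nn.LSTM(hidden_dim, hidden_dim, num_layers=1, batch_first=True)

trg_emb = torch.randn(B, seq_len, hidden_dim)
h_t = torch.randn(B, hidden_dim)   # 2-D, like my h_t.view(h_t.size(0), h_t.size(1))
c_t = torch.randn(B, hidden_dim)

# raises the same "Expected hidden[0] size (1, 64, 256)" RuntimeError
trg_h, _ = decoder(trg_emb, (h_t, c_t))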