Model.forward() returning batch of tensors

Hi,
I need help translating a PyTorch model to C++ for inference. The forward method is shown below:
def forward(self, inputs):
    blen = inputs.size(0)  # batch size (1)
    tlen = inputs.size(1)  # number of frames (number of samples / 128)

    h1 = torch.Tensor(blen, L1).uniform_(-1.0, 1.0).to(self.dev)
    h2 = torch.Tensor(blen, L2).uniform_(-1.0, 1.0).to(self.dev)
    h  = []

    for t in range(tlen):
        x  = inputs[:, t, :]
        h1 = self.gru1(x, h1)
        h2 = self.gru2(h1, h2)
        h3 = self.fc3(h2).clamp(min=0)
        h4 = self.fc4(h3)
        h.append(h4)
    y = torch.stack(h).permute(1, 0, 2)
    return F.log_softmax(y, dim=-1)
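
For reference, a quick shape check on the Python side (a minimal sketch; model stands for an instance of the module above, and the dummy 64-feature input matches the C++ call further down):

import torch

model.eval()                          # inference mode
with torch.no_grad():
    dummy = torch.randn(1, 24, 64)    # (batch, frames, features)
    out = model(dummy)
print(out.shape)                      # expected: torch.Size([1, 24, 2])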

The returned tensor has shape (1, 24, 2), where 24 is the number of frames for a particular wav file.
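
Before being loaded from C++ the model has to be exported to TorchScript; a minimal export sketch (assuming torch.jit.trace with an illustrative example input and a placeholder file name; torch.jit.script is the alternative for modules with data-dependent loops) is:

import torch

example = torch.randn(1, 24, 64)                 # (batch, frames, features)
traced = torch.jit.trace(model.eval(), example)  # model as above
traced.save("model.pt")                          # placeholder file name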
In the C++ code,
auto input_tensor = torch::from_blob(ai->input_data_cat, {1, 24, 64}, at::kFloat);
std::vector<torch::jit::IValue> inputs;
inputs.push_back(input_tensor);
auto output = model.forward(inputs).toTensor();
The output shape is only (1, 1, 2), whereas I am expecting (1, 24, 2).
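
One way to narrow this down (a sketch, reusing the placeholder file name from the export step above) is to reload the exported module in Python and compare shapes; if this already prints (1, 1, 2), the frame dimension is lost during export rather than on the C++ side:

import torch

loaded = torch.jit.load("model.pt")   # placeholder file name from above
with torch.no_grad():
    out = loaded(torch.randn(1, 24, 64))
print(out.shape)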

How can I fix this?
I am using libtorch-cxx11-abi-shared-with-deps-1.4.0+cpu.zip and torch version 1.4.

Regards
JK