LSTM CNN Loss issue

Below is the LSTM layer I used for my LSTM-CNN.
I am having trouble calculating the loss because of a mismatch between the target and input sizes.
Am I missing a step here?

import torch
import torch.nn as nn


class lstm(nn.Module):
    def __init__(self, bs, num, lstm1_input=1024, lstm_hidden=512):
        super().__init__()
        self.batch = bs   # batch size
        self.num = num    # sequence length (frames per sample)

        # 2-layer LSTM: input feature size -> hidden state size
        self.lstm = nn.LSTM(lstm1_input, lstm_hidden, 2, batch_first=True)
        # self.linear = nn.Linear(lstm_hidden, 1)
        self.drop = nn.Dropout(0.5)

    def forward(self, input):
        print(input.size())  # see below
        # regroup the flat (batch * seq, features) CNN output into (batch, seq, features)
        x = input.view(self.batch, self.num, input.size(1))
        x, _ = self.lstm(x)
        print(x.size())  # see below
        # keep only the last time step
        x = x[:, -1, :]
        print(x.size())  # see below
        x = self.drop(x)
        return x

torch.Size([192,1024])
torch.Size([16, 12, 512])
torch.Size([16, 512])

Unless your target size is the same as your LSTM's hidden dim, you need to apply a linear layer after the LSTM to transform the output to the right size; this also gets you away from the tanh activation coming out of the LSTM.

nn.Linear(512, TARGET_SIZE)

I also recommend removing the dropout if this is at the very end of your network.
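
Something along these lines, as a rough sketch of your module with that change (LSTMHead and TARGET_SIZE are just placeholder names here; set TARGET_SIZE to whatever size your targets actually have):

import torch
import torch.nn as nn

TARGET_SIZE = 10  # placeholder: replace with your actual per-sequence output size

class LSTMHead(nn.Module):
    """Sketch of the module above with a linear output layer and no dropout at the end."""
    def __init__(self, bs, num, lstm_input=1024, lstm_hidden=512):
        super().__init__()
        self.batch = bs
        self.num = num
        self.lstm = nn.LSTM(lstm_input, lstm_hidden, 2, batch_first=True)
        self.linear = nn.Linear(lstm_hidden, TARGET_SIZE)  # hidden dim -> target size

    def forward(self, x):
        x = x.view(self.batch, self.num, x.size(1))  # (batch, seq, features)
        x, _ = self.lstm(x)
        x = x[:, -1, :]                              # last time step: (batch, hidden)
        return self.linear(x)                        # (batch, TARGET_SIZE)

# usage sketch
model = LSTMHead(bs=16, num=12)
cnn_features = torch.randn(192, 1024)   # stand-in for the flattened CNN output
out = model(cnn_features)               # -> torch.Size([16, TARGET_SIZE])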

I can very well use that… but the batch size is not the same as the target batch size…

Your sequence length seems to be 12, so you need to use every 12th target in your target tensor, not the entire thing. Alternatively, you could make your input in the shape of (batch, seq, *input_shape) and your targets in the shape of (batch, *target_shape), and remove the view step in your forward. See the sketch below for the first option.
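
Rough sketch of picking every 12th target (the loss and target tensor here are placeholders; since your model keeps the last time step, this takes the last target of each 12-frame window):

import torch
import torch.nn as nn

seq_len = 12
criterion = nn.MSELoss()          # placeholder: substitute your actual loss

targets = torch.randn(192, 1)     # placeholder: one target per frame, 192 = 16 * 12
output = torch.randn(16, 1)       # stand-in for the model output after the linear layer

# keep only the last target of each 12-frame window -> shape (16, 1),
# which now matches the (batch, ...) shape of the model output
targets = targets[seq_len - 1::seq_len]

loss = criterion(output, targets)
print(loss)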

Thanks…
Do you have a link to any code that performs this set of steps?