LSTM CNN Loss issue

Below is the LSTM layer I used for my LSTM-CNN, but I'm having trouble calculating the loss due to a mismatch between the target and input sizes. Am I missing a step here?

class lstm(nn.Module):
    def __init__(self, bs, num, lstm1_input=1024, lstm_hidden=512):
        super().__init__()
        # input size, hidden state size, num layers
        self.lstm = nn.LSTM(lstm1_input, lstm_hidden, 2, batch_first=True)
    def forward(self, input):
        print(input.size())
        x, _ = self.lstm(input)  # run the LSTM (this call was missing)
        print(x.size())  # see below
        x = x[:, -1, :]  # keep only the last timestep
        print(x.size())  # see below
        return x

torch.Size([16, 12, 512])
torch.Size([16, 512])
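For context, the mismatch described above could look like this (the loss function and the per-timestep target shape are assumptions for illustration; 192 = batch 16 × sequence length 12):

```python
import torch
import torch.nn as nn

out = torch.randn(16, 512)      # model output: last LSTM timestep per sequence
target = torch.randn(192, 512)  # hypothetical targets built per timestep (16 * 12)

try:
    nn.MSELoss()(out, target)   # shapes (16, 512) vs (192, 512) cannot broadcast
except RuntimeError as e:
    print("shape mismatch:", e)
```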

Unless your target size is the same as your LSTM's hidden dim, you need to apply a linear layer after the LSTM to transform the output to the right size, and also get rid of the tanh activation coming out of the LSTM.

nn.Linear(512, TARGET_SIZE)

I also recommend removing the dropout if this is at the very end of your network.
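A minimal sketch of this suggestion: the 1024/512 sizes come from the thread, while `TARGET_SIZE`, the class name, and the random input are placeholders. The linear head projects the last hidden state to the target size, and no tanh is applied to the output:

```python
import torch
import torch.nn as nn

TARGET_SIZE = 10  # hypothetical target dimension / number of classes

class LSTMHead(nn.Module):  # hypothetical name
    def __init__(self, input_size=1024, hidden_size=512):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden_size, TARGET_SIZE)  # hidden dim -> target size

    def forward(self, x):
        x, _ = self.lstm(x)   # (batch, seq, hidden)
        x = x[:, -1, :]       # last timestep: (batch, hidden)
        return self.fc(x)     # raw output, no tanh: (batch, TARGET_SIZE)

model = LSTMHead()
out = model(torch.randn(16, 12, 1024))
print(out.shape)  # torch.Size([16, 10])
```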

I can very well use that… but the batch size is not the same as the target batch size…

Your sequence length seems to be 12, so you need to use every 12th target in your target tensor, not the entire thing. Or you could make your input shape (batch, seq, *inputshape) and your target shape (batch, *targetshape), and remove the view step in your forward.
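A rough illustration of the two options above, using the batch size 16 and sequence length 12 from the thread (the target tensors themselves are dummies):

```python
import torch

batch, seq = 16, 12

# Option 1: targets were built per timestep (batch * seq entries) --
# keep only the target for the last timestep of each sequence.
flat_targets = torch.arange(batch * seq)          # 192 per-timestep targets
per_seq_targets = flat_targets[seq - 1::seq]      # every 12th target -> shape (16,)

# Option 2: build one target per sequence from the start, so the input is
# (batch, seq, *inputshape) and the target is (batch, *targetshape).
targets = torch.randint(0, 10, (batch,))          # shape (16,)

print(per_seq_targets.shape, targets.shape)
```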

Do you have a link to any code that performs this set of steps?