Not able to input a signal of format (batch_size, signal) to a Bidirectional LSTM

Hello,

I am trying to input a signal of format [batch_size, signal] to a Bidirectional LSTM. The signal consists of 660 values and the batch size is 50, so, as I understand it, the input to the LSTM is a [50, 660] tensor.

I know that when “batch_first = True” the input to an LSTM layer must be a 3D tensor of shape (batch_size, sequence_length, input_size), but I am not able to work out how to convert my signal into that format. Hence, I am getting this “RuntimeError”: “input must have 3 dimensions, got 2”.
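For what it’s worth, these are the two reshapes I have considered (I am not sure which interpretation is right for my signal):

```python
import torch

x = torch.randn(50, 660)   # [batch_size, signal]

# Option A: a single time step holding all 660 values
x_a = x.unsqueeze(1)       # [50, 1, 660]

# Option B: 660 time steps of one value each
x_b = x.unsqueeze(2)       # [50, 660, 1]

print(x_a.shape, x_b.shape)
```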

I have two queries:

  1. How should I give my signal as input to the network and how should I reshape the signal?
  2. Will the network I made be able to process the signal and generate an embedding?

This is my network architecture:

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

sequence_length = 660
input_size = 660
hidden_dim = 32
num_epochs = 50
learning_rate = 0.001

margin = 0.2

class NetNet(nn.Module):
    
    def __init__(self, input_size, hidden_dim):
        
        super(NetNet, self).__init__()
        
        self.hidden_size = hidden_dim 
        
        self.lstm = nn.LSTM(input_size, self.hidden_size, bidirectional = True, batch_first = True)
        # The bidirectional LSTM outputs hidden_size * 2 features per time step
        self.fc1 = nn.Linear(self.hidden_size * 2, self.hidden_size)
        self.fc2 = nn.Linear(self.hidden_size, self.hidden_size)
    
    def forward(self, x):
        
        # 2 = num_layers * num_directions for a single bidirectional layer
        h0 = torch.zeros(2, x.size(0), self.hidden_size).to(device)
        c0 = torch.zeros(2, x.size(0), self.hidden_size).to(device)
        
        print('h0 size: ' + str(h0.size()))
        print('c0 size: ' + str(c0.size()))
        print('x size: ' + str(x.size()))
        
        # Bidirectional LSTM
        out, _ = self.lstm(x, (h0, c0))   # [batch, seq_len, hidden_size * 2]
        # Applies avg pooling over the temporal dimension
        out = out.mean(dim = 1)           # [batch, hidden_size * 2]
        out = torch.tanh(out)
        
        # First Fully Connected Layer
        out = self.fc1(out)
        out = torch.tanh(out)
        
        # Second Fully Connected Layer
        out = self.fc2(out)
        out = torch.tanh(out)
        
        # L2-normalized output
        norm = torch.norm(out, p = 2, dim = 1, keepdim = True)
        output = out / norm
        
        return output 
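For reference, these are the tensor shapes I expect at each stage, sketched with a plain bidirectional LSTM and a stand-in linear layer (assuming I treat each of the 660 values as one time step of size 1; these are not the exact layers from my network above):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the layers in NetNet
lstm = nn.LSTM(input_size=1, hidden_size=32, bidirectional=True, batch_first=True)
fc = nn.Linear(32 * 2, 32)

x = torch.randn(50, 660)      # [batch_size, signal]
x = x.unsqueeze(2)            # [50, 660, 1]: 660 time steps, 1 value each

out, _ = lstm(x)              # [50, 660, 64]: hidden_size * 2 features per step
emb = fc(out.mean(dim=1))     # average-pool over time, then project -> [50, 32]
print(emb.shape)
```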

Please guide me; I have been stuck on this for quite some time now. Thank you!