Input for multiple layered RNN

Hi guys,
I’m stuck and really need your help.
I am trying to run this step:
rnn(X, self.hidden)

self.hidden has size [self.n_layers, self.batch_size, self.n_neurons], which is 1x100x15 in my case.
X.shape is (100, 16) initially, where 100 is the batch size and 16 is the number of features.

To make it work I transform X with X = X.unsqueeze(dim=0), so that X now has shape (1, 100, 16).
First question: is this the right thing to do? It works, but I don’t understand why…
Second question: I want to make the number of layers equal to 2,
so self.hidden now has size [2, 100, 15], but how should I transform X?

It gives me error now:
RuntimeError: Expected hidden size (1, 100, 15), got (2, 100, 15)
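
For reference, here is a stripped-down sketch that reproduces the error (my real code wraps this in a class, so the details here are simplified; the nn.RNN is created with the default single layer):

import torch
import torch.nn as nn

batch_size, n_features, n_neurons, n_layers = 100, 16, 15, 2

rnn = nn.RNN(input_size=n_features, hidden_size=n_neurons)  # num_layers defaults to 1

X = torch.randn(batch_size, n_features)                # (100, 16)
X = X.unsqueeze(dim=0)                                 # (1, 100, 16)
hidden = torch.zeros(n_layers, batch_size, n_neurons)  # (2, 100, 15)

output, h = rnn(X, hidden)  # raises the RuntimeError above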

Thank you very much in advance!!!

Based on your code it looks like you are passing a single sequence, is this correct?
The shape of your input should be [seq_len, batch_size, input_size], so currently seq_len=1.
The shape of your hidden state should be alright, as it’s expected to be [num_layers, batch_size, hidden_size].
You don’t have to change your input when you change the number of layers.
Here is a small example taken from the docs:

import torch
import torch.nn as nn

# Input is (seq_len, batch_size, input_size),
# hidden state is (num_layers, batch_size, hidden_size)
seq_len, batch_size, input_size = 1, 100, 15
num_layers, hidden_size = 2, 20

rnn = nn.RNN(
    input_size=input_size,
    hidden_size=hidden_size,
    num_layers=num_layers
)

x = torch.randn(seq_len, batch_size, input_size)
h0 = torch.randn(num_layers, batch_size, hidden_size)

output, h = rnn(x, h0)
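
And mapping this to the shapes from your post (a sketch, assuming the default batch_first=False and that you also pass num_layers=2 when creating the nn.RNN):

import torch
import torch.nn as nn

batch_size, n_features, n_neurons, n_layers = 100, 16, 15, 2

rnn = nn.RNN(input_size=n_features, hidden_size=n_neurons, num_layers=n_layers)

X = torch.randn(batch_size, n_features)                # (100, 16)
X = X.unsqueeze(dim=0)                                 # (1, 100, 16) -> seq_len = 1
hidden = torch.zeros(n_layers, batch_size, n_neurons)  # (2, 100, 15)

output, hidden = rnn(X, hidden)
print(output.shape)   # torch.Size([1, 100, 15]) -> (seq_len, batch_size, hidden_size)
print(hidden.shape)   # torch.Size([2, 100, 15]) -> (num_layers, batch_size, hidden_size)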

Thanks, it works now!
