Convolutional LSTM

Hi guys, I have been working on an implementation of a convolutional LSTM.
First I implemented a ConvLSTM cell, and then a module that allows stacking multiple layers.
Here’s the code:

It'd be nice if anybody could comment on the correctness of the implementation, or on how I can improve it.
Thanks!


There is no need to implement the LSTM yourself. In the forward() of your CLSTM_cell, just feed the output of the conv into nn.LSTM, like the code below:

        x = self.CNN(x)
        # flatten the feature map: (batch, 512, H, W) -> (batch, 512, H*W)
        x = x.view(x.size(0), 512, -1)
        # (batch, input_size, seq_len) -> (batch, seq_len, input_size)
        x = x.transpose(1, 2)
        # (batch, seq_len, input_size) -> (seq_len, batch, input_size)
        x = x.transpose(0, 1).contiguous()
        x, _ = self.LSTM1(x)

You should define self.LSTM1 in your __init__, like

self.LSTM1 = nn.LSTM(input_size=nIn, hidden_size=nHidden, num_layers=1, dropout=0)

Also refer to the documentation of nn.LSTM for how to use it.
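
Putting those pieces together, a minimal self-contained sketch might look like the following (the Conv2d backbone and the nIn/nHidden defaults are placeholders I made up, not your actual network):

    import torch
    import torch.nn as nn

    class CNNtoLSTM(nn.Module):
        """Sketch: CNN feature map flattened into a sequence, fed to nn.LSTM."""
        def __init__(self, nIn=512, nHidden=256):
            super(CNNtoLSTM, self).__init__()
            self.CNN = nn.Conv2d(3, nIn, kernel_size=3, padding=1)  # placeholder backbone
            self.LSTM1 = nn.LSTM(input_size=nIn, hidden_size=nHidden,
                                 num_layers=1, dropout=0)

        def forward(self, x):
            x = self.CNN(x)                     # (batch, nIn, H, W)
            x = x.view(x.size(0), x.size(1), -1)  # (batch, input_size, seq_len)
            x = x.transpose(1, 2)               # (batch, seq_len, input_size)
            x = x.transpose(0, 1).contiguous()  # (seq_len, batch, input_size)
            out, _ = self.LSTM1(x)
            return out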

Hi,
I think this is different; I am trying to do something similar to what is presented in this paper.
Here the input is an image, and the states are also multichannel images. The input-to-hidden and hidden-to-hidden operations are convolutions instead of matrix-vector multiplications.
In your code you just convert the output of a CNN to a vector and use the regular LSTM.
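
For concreteness, here is a minimal sketch of what such a cell looks like (the class name and the single-convolution gate layout are my choices, loosely following the formulation in the paper; this is not the exact code from the repo):

    import torch
    import torch.nn as nn

    class ConvLSTMCellSketch(nn.Module):
        """Minimal ConvLSTM cell: matrix-vector products are replaced by
        convolutions, so the hidden and cell states stay multichannel images."""
        def __init__(self, input_channels, hidden_channels, kernel_size=3):
            super(ConvLSTMCellSketch, self).__init__()
            padding = kernel_size // 2  # same-padding keeps the spatial size
            # one convolution produces all four gates at once
            self.conv = nn.Conv2d(input_channels + hidden_channels,
                                  4 * hidden_channels, kernel_size, padding=padding)

        def forward(self, x, state):
            h, c = state  # both (batch, hidden_channels, H, W)
            gates = self.conv(torch.cat([x, h], dim=1))
            i, f, o, g = torch.chunk(gates, 4, dim=1)
            i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
            g = torch.tanh(g)
            c_next = f * c + i * g
            h_next = o * torch.tanh(c_next)
            return h_next, c_next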

Yeah, I agree with you. Have you tested it? Does it work? I think an LSTM may have too many parameters; maybe a GRU would work better?
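
For comparison, a convolutional GRU cell would use three gates and no separate cell state, which cuts the parameter count; a rough sketch (all names are mine, untested against the thread's code):

    import torch
    import torch.nn as nn

    class ConvGRUCellSketch(nn.Module):
        """Sketch of a convolutional GRU cell: three gates instead of the
        LSTM's four, and no separate cell state."""
        def __init__(self, input_channels, hidden_channels, kernel_size=3):
            super(ConvGRUCellSketch, self).__init__()
            padding = kernel_size // 2
            # update gate z and reset gate r, computed together
            self.gates = nn.Conv2d(input_channels + hidden_channels,
                                   2 * hidden_channels, kernel_size, padding=padding)
            self.candidate = nn.Conv2d(input_channels + hidden_channels,
                                       hidden_channels, kernel_size, padding=padding)

        def forward(self, x, h):
            zr = self.gates(torch.cat([x, h], dim=1))
            z, r = torch.chunk(zr, 2, dim=1)
            z, r = torch.sigmoid(z), torch.sigmoid(r)
            h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
            return (1 - z) * h + z * h_tilde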

Hi!
You have done great work; I am also interested in ConvLSTM and want to do something with it.
I don't know how it runs on your machine, but I can't run your code directly, so I rewrote some parts and it runs well with these changes. I changed the loop in CLSTM.forward to:

    for idlayer in range(self.num_layers):  # loop over stacked layers
        hidden_c = hidden_state[idlayer]
        output_inner = []

        for t in range(seq_len):  # unroll this layer over time
            hidden_c = self.cell_list[idlayer](current_input[:, t, :, :, :], hidden_c)
            output_inner.append(hidden_c[0].unsqueeze(1))

        next_hidden.append(hidden_c)
        # the per-step hidden states become the next layer's input sequence
        current_input = torch.cat(output_inner, 1)
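
For reference, this loop assumes batch-first 5-D input of shape (batch, seq_len, channels, height, width); a hypothetical driver (the init_hidden helper and call signature are assumed, not taken from the repo) would be:

    # hypothetical shapes: 8 sequences of 10 RGB frames at 64x64
    inputs = torch.randn(8, 10, 3, 64, 64)
    hidden_state = model.init_hidden(batch_size=8)  # assumed helper
    outputs, next_hidden = model(inputs, hidden_state)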

Do these changes conflict with your original intention?


Hi alan, could you tell me what error you are getting with the original code?
I will check your changes to see if they do the same thing.

The most obvious error is that the feature map sizes are not compatible; for example, I can't use torch.cat to concatenate the input image and the hidden states successfully.
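
To make the constraint concrete: torch.cat along the channel dimension requires the input frame and the hidden state to share spatial dimensions, which is why the convolutions need same-padding. A toy check (all shapes invented for illustration):

    import torch

    x = torch.randn(4, 3, 32, 32)        # input frame: (batch, channels, H, W)
    h = torch.randn(4, 16, 32, 32)       # hidden state with the same H and W
    combined = torch.cat([x, h], dim=1)  # -> (4, 19, 32, 32): works
    # With h of shape (4, 16, 30, 30) (e.g. a conv without padding),
    # torch.cat would raise a size-mismatch RuntimeError.
    print(combined.shape)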

Thanks @alan_ayu!
There was indeed an error in the input format (batch, seq_len, …). It happened because I used the right format in my own code but put a wrong one on GitHub. Could you please check again? Let me know if you still have any issues.

There is also this model:

The net now works well!