ConvLSTM just replaces the linear operations in a standard LSTM with convolutions.

I want to use autograd. Is it possible to implement this simply in Python, with no CUDA or C code?

ConvLSTM:


I think you have to implement a new nn.Module subclass for the ConvLSTM and add the parameters corresponding to the W and b parameters to the module's parameter pool. Then it is just a matter of implementing the math with the available PyTorch operations…

In order to make it easier, maybe it would be better to implement a ConvLSTMCell first that just produces one time step output and then the full class ConvLSTM that deals with the loop and does the call to ConvLSTMCell.
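The two-level structure suggested above could be sketched roughly like this. This is only a minimal illustration under my own assumptions (class and argument names are made up here, and the common trick of computing all four gates with one convolution over the concatenated input and hidden state is used), not code from the thread:

```python
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """One time step of a ConvLSTM.

    All four gate pre-activations are computed with a single
    convolution over the concatenation of input and hidden state.
    """

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.hidden_channels = hidden_channels
        # 4 * hidden_channels: input gate, forget gate, output gate, cell candidate
        self.conv = nn.Conv2d(
            in_channels + hidden_channels,
            4 * hidden_channels,
            kernel_size,
            padding=kernel_size // 2,  # keep spatial size unchanged
        )

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c


class ConvLSTM(nn.Module):
    """Runs a ConvLSTMCell over a (time, batch, C, H, W) sequence."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.cell = ConvLSTMCell(in_channels, hidden_channels, kernel_size)

    def forward(self, seq):
        t, b, _, height, width = seq.shape
        hc = self.cell.hidden_channels
        h = seq.new_zeros(b, hc, height, width)
        c = seq.new_zeros(b, hc, height, width)
        outputs = []
        for step in range(t):
            h, c = self.cell(seq[step], (h, c))
            outputs.append(h)
        return torch.stack(outputs), (h, c)
```

Since everything inside `forward` is built from differentiable PyTorch operations, autograd handles the backward pass through the time loop automatically.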

I am new to PyTorch, thanks for reply.

This seems like the definition of an RNNCell. Suppose I write this:

`class ConvLSTMCell(nn.Module):`

do you mean that in this ConvLSTMCell module, I can use nn.Linear and nn.Conv2d, and it works with autograd?

Yes, you could use those operations… However, I don't know if nn.Linear would be adequate to construct your desired module. You'll have to define a few nn.Parameters corresponding to the W matrices and b vectors, and then use F.conv2d together with torch.bmm to achieve those equations.
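Registering the weights by hand might look like the sketch below. The class and attribute names here are my own invention for illustration; the point is only that anything wrapped in `nn.Parameter` is tracked by autograd and shows up in the module's parameter pool:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HandmadeConv(nn.Module):
    """A convolution with W and b registered by hand as nn.Parameters,
    applied via the functional F.conv2d. Autograd tracks both because
    nn.Parameter attributes are registered automatically."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Small random init; a real module would use a principled scheme.
        self.W = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.b = nn.Parameter(torch.zeros(out_ch))
        self.padding = kernel_size // 2

    def forward(self, x):
        return F.conv2d(x, self.W, self.b, padding=self.padding)
```

Using `nn.Conv2d` directly does the same registration for you; the hand-rolled version is mainly useful when the W and b in your equations do not map one-to-one onto existing modules.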

Hi, I was trying to implement conv_lstm, though not exactly with the equations you posted.

Here's the code for it; maybe somebody can check it and let me know if it looks good or if some changes should be made:

https://github.com/rogertrullo/pytorch_convlstm


I think you should either use Variables with requires_grad = True or a Parameter class for the weights. But I might be wrong…

And that code is just for a cell, right? You would have to do a for loop over the time steps somewhere!

Hi, the code is just for a cell; I updated the gist to show how the sequence loop could be used.

About the requires_grad: I use the nn.Conv2d module, isn't that automatic when using nn modules?

Could anybody confirm?

I can confirm that if you use nn modules, `requires_grad` is automatically true on their parameters.
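This is easy to verify with a plain `nn.Conv2d` (a minimal check, not from the thread):

```python
import torch.nn as nn

# Parameters created by nn modules require gradients by default,
# so gradients flow into them without any extra flags.
conv = nn.Conv2d(3, 8, kernel_size=3)
print(all(p.requires_grad for p in conv.parameters()))  # → True
```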

How do I combine **BatchNormalization with ConvLSTM**? I am facing difficulty with this.

- Where should the BN concept be applied in ConvLSTM?

Has anyone applied this concept?
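One possible placement (an assumption on my part, not something established in this thread) is to normalize the per-step hidden state with `nn.BatchNorm2d`, since at each time step it is an ordinary (batch, C, H, W) feature map. The sketch below only demonstrates that the shapes work out:

```python
import torch
import torch.nn as nn

# Hypothetical placement: BatchNorm2d applied to the hidden state of a
# ConvLSTM at one time step (stand-in tensor here). Other placements,
# e.g. on the gate pre-activations, are also possible.
hidden_channels = 8
bn = nn.BatchNorm2d(hidden_channels)
h = torch.randn(4, hidden_channels, 16, 16)  # (batch, C, H, W) at one step
h_normed = bn(h)
assert h_normed.shape == h.shape
```

Note that sharing one BatchNorm across all time steps versus using per-step statistics is a design choice with real consequences for recurrent models, so treat this as a starting point rather than a recipe.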