Unrolling nn.LSTM

Hi,

I am currently implementing a DRQN network, which works correctly; however, I want to unroll the LSTM for a specified number of steps. How do I do this in PyTorch? Could someone provide some insight?

I have followed this tutorial, but there is no mention of unrolling:
https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html

import torch
import torch.nn as nn
import torch.nn.functional as F

class DRQNBody(nn.Module):
    def __init__(self, in_channels=4):
        super(DRQNBody, self).__init__()
        self.feature_dim = 512
        self.rnn_input_dim = 7 * 7 * 64
        self.batch_size = -1
        self.conv1 = layer_init(nn.Conv2d(in_channels, 32, kernel_size=8, stride=4))
        self.conv2 = layer_init(nn.Conv2d(32, 64, kernel_size=4, stride=2))
        self.conv3 = layer_init(nn.Conv2d(64, 64, kernel_size=3, stride=1))
        self.lstm = nn.LSTM(self.rnn_input_dim, self.feature_dim, num_layers=4)
        self.hidden = self.init_hidden()   # init_hidden() defined elsewhere; returns (h_0, c_0)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = F.relu(self.conv2(y))
        y = F.relu(self.conv3(y))
        y = y.view(y.size(0), -1)                            # flattening
        y = y.view(1, self.batch_size, self.rnn_input_dim)   # adding the sequence dimension
        output, self.hidden = self.lstm(y, self.hidden)
        y = output
        y = torch.squeeze(y, 1)
        return y

Use a for loop, feeding the hidden state back in at every step:

for xt in x:   # x assumed to contain the 10 time steps along its first dimension
    output, (hn, cn) = self.lstm(xt[:, None, :], (hn, cn))
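For context, here is a minimal self-contained sketch of this kind of manual unrolling. The shapes and tensor names are just assumptions for illustration; with the default batch_first=False layout the singleton dimension goes in front, so xt[None] is used here.

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=1)

seq_len, batch = 10, 4
x = torch.randn(seq_len, batch, 8)       # 10 time steps

# zero-initialized hidden and cell states: (num_layers, batch, hidden_size)
hn = torch.zeros(1, batch, 16)
cn = torch.zeros(1, batch, 16)

outputs = []
for xt in x:                                   # xt: (batch, input_size)
    out, (hn, cn) = lstm(xt[None], (hn, cn))   # xt[None]: (1, batch, input_size)
    outputs.append(out)

outputs = torch.cat(outputs, dim=0)      # (seq_len, batch, hidden_size)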

Thanks for the reply, this solves my problem, although I don't understand the meaning behind the indexing in xt[:, None, :].

Indexing with None inserts a new dimension of size 1 at that position (the same as unsqueeze), which gives each time step the 3-D shape the LSTM expects; exactly where you need to add it would depend on how you loaded the data.
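To make the indexing concrete, a quick shape check (the tensor here is just a made-up example):

import torch

xt = torch.randn(4, 8)                                 # e.g. (batch, features)
print(xt[:, None, :].shape)                            # torch.Size([4, 1, 8])
print(xt[None, :, :].shape)                            # torch.Size([1, 4, 8])
print(torch.equal(xt[:, None, :], xt.unsqueeze(1)))    # True: None == unsqueeze at that position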


Ah okay, I am now unrolling in the forward pass; how do I do it in the backward pass? And are both needed for DRQN?

PyTorch handles the backprop for you. The computation graph is built dynamically, so the LSTM appears 10 times in the graph and the backward pass automatically propagates through all 10 steps. Sorry, I don't have much knowledge about reinforcement learning.
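As a rough illustration of that point (the loss below is a placeholder, not taken from the thread): after the unrolling loop, a single backward() call propagates gradients through every repetition of the LSTM.

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16)
x = torch.randn(10, 4, 8)                # 10 time steps
hn = torch.zeros(1, 4, 16)
cn = torch.zeros(1, 4, 16)

for xt in x:                             # the LSTM is applied 10 times in the graph
    out, (hn, cn) = lstm(xt[None], (hn, cn))

loss = ((out - torch.zeros_like(out)) ** 2).mean()   # placeholder loss
loss.backward()                          # backprops through all 10 steps automatically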


Hi,

I get the feeling this doesn't unroll it on its own. I think I need to somehow feed the output back in as an input too?
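In case it helps to picture it, feeding the output back in as the next input would look roughly like this. This is only a sketch under assumed shapes, with a hypothetical projection layer to map the hidden size back to the input size; the thread doesn't confirm whether DRQN actually needs this.

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16)
proj = nn.Linear(16, 8)                  # hypothetical layer: hidden_size -> input_size

xt = torch.randn(1, 4, 8)                # initial input: (seq_len=1, batch, input_size)
hn = torch.zeros(1, 4, 16)
cn = torch.zeros(1, 4, 16)

for _ in range(10):
    out, (hn, cn) = lstm(xt, (hn, cn))
    xt = proj(out)                       # next input derived from the current output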