Very slow training

I added an LSTM layer after one of the convolutions in a VGG-16 model. Over time, the model learns just fine. However, with just this one LSTM layer of 32 cells added, training and evaluation take about 10x longer.

I added the LSTM layer to the VGG framework as follows:

import torch.nn as nn

def make_layers(cfg, batch_norm=False):
    layers = []
    in_channels = 3
    for count, v in enumerate(cfg, start=1):
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v  # update for the next conv in either branch
        if count == 5:
            # insert my custom row-LSTM layer after the fifth config entry
            rlstm = RLSTM(v)
            rlstm = rlstm.cuda()
            layers += [rlstm]
    return nn.Sequential(*layers)
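
For anyone who wants to reproduce the comparison, a rough forward-pass timing harness looks like this (a simplified sketch I wrote for this post; the batch and input size are arbitrary assumptions, and it assumes a CUDA device):

import time
import torch

def time_forward(model, iters=20, size=(8, 3, 224, 224)):
    # Average forward-pass wall-clock time on the GPU.
    x = torch.randn(*size, device='cuda')
    model = model.cuda().eval()
    with torch.no_grad():
        for _ in range(3):        # warm-up so CUDA init isn't counted
            model(x)
        torch.cuda.synchronize()  # GPU work is async; sync before timing
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()  # sync again so all iterations are counted
    return (time.time() - start) / iters

Running this on the model built with and without the RLSTM layer shows the same order-of-magnitude gap I see in training.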

Is this a common issue? The layer I added is very similar to the RowLSTM from Google's PixelRNN paper. Do LSTM layers just take this much longer to train in general?
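
For reference, here is roughly what my RLSTM does. This is a simplified sketch rather than my exact implementation; the hidden size of 32 matches the 32 cells mentioned above, but the scanning scheme and the output projection are stand-ins:

import torch
import torch.nn as nn

class RLSTM(nn.Module):
    # Row-LSTM-style layer: fold the rows of the feature map into the
    # batch and scan each row left to right, one pixel per LSTM step.
    def __init__(self, channels, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=channels, hidden_size=hidden_size,
                            batch_first=True)
        self.proj = nn.Linear(hidden_size, channels)  # back to C channels

    def forward(self, x):
        n, c, h, w = x.shape
        # (N, C, H, W) -> (N*H, W, C): width becomes the sequence axis,
        # so every forward pass does a W-step serial scan per row
        seq = x.permute(0, 2, 3, 1).reshape(n * h, w, c)
        out, _ = self.lstm(seq)
        out = self.proj(out)
        return out.reshape(n, h, w, c).permute(0, 3, 1, 2).contiguous()

My suspicion is that this serial scan across each row, which a convolution avoids by computing all positions in parallel, is where the extra time goes, but I would like to confirm that this is expected.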