Hi,

I have an LSTM-CNN model that I am training on my time-series data.

My training loss is barely decreasing. I also tried increasing the size of the model, but the training loss still does not go down.

I have already scaled and resampled the data.
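By scaling I mean per-feature standardization, roughly like the sketch below (the arrays are random placeholders standing in for my real features; I am assuming scikit-learn's `StandardScaler` here):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the real features
train_2d = np.random.randn(1000, 6)
test_2d = np.random.randn(200, 6)

scaler = StandardScaler()
train_scaled = scaler.fit_transform(train_2d)  # fit statistics on train only
test_scaled = scaler.transform(test_2d)        # reuse the train statistics

print(train_scaled.mean(axis=0).round(6))  # ~0 per feature after scaling
```

The important part is fitting the scaler on the training split only and reusing its statistics for validation/test, so no test information leaks into training.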

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LSTMNet(nn.Module):
    def __init__(self, size, input_shape):
        super(LSTMNet, self).__init__()
        # LSTM branch consumes (batch, time, features)
        self.lstm = nn.LSTM(input_size=input_shape[-1], hidden_size=size,
                            batch_first=True)
        # Conv branch consumes (batch, channels, time)
        self.conv1 = nn.Conv1d(input_shape[-1], size, kernel_size=8, padding=4)
        self.bn1 = nn.BatchNorm1d(size)
        self.conv2 = nn.Conv1d(size, size * 2, kernel_size=5, padding=2)
        self.bn2 = nn.BatchNorm1d(size * 2)
        self.conv3 = nn.Conv1d(size * 2, size, kernel_size=3, padding=1)
        self.bn3 = nn.BatchNorm1d(size)
        self.pooling = nn.AdaptiveAvgPool1d(1)
        # Concatenation of LSTM last step (size) + pooled conv features (size)
        self.fc = nn.Linear(size * 2, 1)

    def forward(self, x):
        x_lstm, _ = self.lstm(x)
        x_conv = self.conv1(x.permute(0, 2, 1))
        x_conv = F.relu(self.bn1(x_conv))
        x_conv = F.relu(self.bn2(self.conv2(x_conv)))
        x_conv = F.relu(self.bn3(self.conv3(x_conv)))
        x_conv = self.pooling(x_conv)
        # squeeze(-1) rather than squeeze(), so a batch of size 1 keeps its dim
        x = torch.cat((x_lstm[:, -1, :], x_conv.squeeze(-1)), dim=1)
        x = self.fc(x)
        return torch.sigmoid(x)
```
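For reference, here is a self-contained sketch of the shape flow the two branches rely on (the dimensions are made up for illustration):

```python
import torch
import torch.nn as nn

batch, seq_len, n_features, size = 4, 50, 6, 8

x = torch.randn(batch, seq_len, n_features)           # (batch, time, features)

# LSTM branch: batch_first=True keeps (batch, time, hidden)
lstm = nn.LSTM(input_size=n_features, hidden_size=size, batch_first=True)
out, _ = lstm(x)
last = out[:, -1, :]                                  # (batch, size)

# Conv branch: Conv1d wants (batch, channels, time), hence the permute
conv = nn.Conv1d(n_features, size, kernel_size=8, padding=4)
pooled = nn.AdaptiveAvgPool1d(1)(conv(x.permute(0, 2, 1)))  # (batch, size, 1)

fused = torch.cat((last, pooled.squeeze(-1)), dim=1)  # (batch, 2 * size)
print(fused.shape)  # torch.Size([4, 16])
```

This is why `fc` takes `size * 2` input features: `size` from the LSTM's last time step plus `size` from the pooled conv branch.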

I have tried `size` values of 8, 16, 32, and 64. In all cases the loss comes out almost the same.

Can someone please let me know how I can improve the training loss?

Below are the loss values for `size = 8`.

```
Epoch 2/20
15355/15355 [==============================] - 274s 18ms/step - loss: 0.5384 - val_loss: 0.5737
Epoch 3/20
15355/15355 [==============================] - 274s 18ms/step - loss: 0.5363 - val_loss: 0.5407
Epoch 4/20
15355/15355 [==============================] - 270s 18ms/step - loss: 0.5351 - val_loss: 0.5592
Epoch 5/20
15355/15355 [==============================] - 278s 18ms/step - loss: 0.5343 - val_loss: 0.5519
Epoch 6/20
15355/15355 [==============================] - 291s 19ms/step - loss: 0.5335 - val_loss: 0.5540
Epoch 7/20
15355/15355 [==============================] - 382s 25ms/step - loss: 0.5331 - val_loss: 0.5734
Epoch 8/20
15355/15355 [==============================] - 479s 31ms/step - loss: 0.5327 - val_loss: 0.5495
Epoch 9/20
15355/15355 [==============================] - 432s 28ms/step - loss: 0.5323 - val_loss: 0.5369
Epoch 10/20
15355/15355 [==============================] - 234s 15ms/step - loss: 0.5319 - val_loss: 0.5354
Epoch 11/20
15355/15355 [==============================] - 245s 16ms/step - loss: 0.5316 - val_loss: 0.5340
Epoch 12/20
15355/15355 [==============================] - 276s 18ms/step - loss: 0.5313 - val_loss: 0.5501
Epoch 13/20
15355/15355 [==============================] - 293s 19ms/step - loss: 0.5311 - val_loss: 0.5364
Epoch 14/20
15355/15355 [==============================] - 287s 19ms/step - loss: 0.5308 - val_loss: 0.5518
Epoch 15/20
15355/15355 [==============================] - 266s 17ms/step - loss: 0.5306 - val_loss: 0.5488
Epoch 16/20
15355/15355 [==============================] - 281s 18ms/step - loss: 0.5304 - val_loss: 0.5515
Epoch 17/20
15355/15355 [==============================] - 261s 17ms/step - loss: 0.5302 - val_loss: 0.5446
Epoch 18/20
15355/15355 [==============================] - 344s 22ms/step - loss: 0.5301 - val_loss: 0.5375
Epoch 19/20
15355/15355 [==============================] - 267s 17ms/step - loss: 0.5299 - val_loss: 0.5204
Epoch 20/20
15355/15355 [==============================] - 256s 17ms/step - loss: 0.5297 - val_loss: 0.5351
```