Newbie needs help!! LSTM training code ...

import time
import torch

print("Training......")
start_time = time.time()
for epoch in range(1000):
    net.train()
    out = net(batch_x)
    Loss = loss(out, batch_y)

    optimizer.zero_grad()
    Loss.backward()
    optimizer.step()

    # lr_scheduler.ReduceLROnPlateau expects the monitored metric:
    scheduler.step(Loss)

    if epoch % 10 == 0:
        net.eval()
        with torch.no_grad():
            out_valid = net(batch_validx)
            Loss_valid = loss(out_valid, batch_validy)
        print('Epoch: {:4}, Loss: {:.5f} , Loss_valid: {:.5f} , lr: {:.5f} '.format(
            epoch, Loss.item(), Loss_valid.item(), optimizer.param_groups[0]["lr"]))

end_time = time.time()
print(f'Training took {end_time - start_time} seconds')

Training......
Epoch: 0, Loss: 1.00316 , Loss_valid: 1.87190 , lr: 0.10000
Epoch: 10, Loss: 1.00277 , Loss_valid: 0.84588 , lr: 0.09000
Epoch: 20, Loss: 0.99660 , Loss_valid: 0.81552 , lr: 0.09000
Epoch: 30, Loss: 0.98478 , Loss_valid: 0.79688 , lr: 0.08100
Epoch: 40, Loss: 0.94413 , Loss_valid: 0.81124 , lr: 0.08100
Epoch: 50, Loss: 0.96319 , Loss_valid: 0.82703 , lr: 0.08100
Epoch: 60, Loss: 0.97517 , Loss_valid: 0.83291 , lr: 0.07290
Epoch: 70, Loss: 0.95339 , Loss_valid: 0.81459 , lr: 0.05905
Epoch: 80, Loss: 0.95462 , Loss_valid: 0.82126 , lr: 0.05314
Epoch: 90, Loss: 0.95244 , Loss_valid: 0.81611 , lr: 0.04783
Epoch: 100, Loss: 0.91957 , Loss_valid: 0.82782 , lr: 0.04305
Epoch: 110, Loss: 0.88055 , Loss_valid: 0.84757 , lr: 0.04305
Epoch: 120, Loss: 0.88261 , Loss_valid: 0.82991 , lr: 0.04305
Epoch: 130, Loss: 0.84414 , Loss_valid: 0.83797 , lr: 0.03874
Epoch: 140, Loss: 0.82690 , Loss_valid: 0.84577 , lr: 0.03874
Epoch: 150, Loss: 0.82871 , Loss_valid: 0.82727 , lr: 0.03487
Epoch: 160, Loss: 0.81299 , Loss_valid: 0.83291 , lr: 0.03487
Epoch: 170, Loss: 0.80848 , Loss_valid: 0.81171 , lr: 0.03487
Epoch: 180, Loss: 0.82118 , Loss_valid: 0.82955 , lr: 0.03138
Epoch: 190, Loss: 0.84017 , Loss_valid: 0.84859 , lr: 0.02824
Epoch: 200, Loss: 0.81144 , Loss_valid: 0.85816 , lr: 0.02542
Epoch: 210, Loss: 0.79448 , Loss_valid: 0.85410 , lr: 0.02288
Epoch: 220, Loss: 0.77297 , Loss_valid: 0.89438 , lr: 0.02288
Epoch: 230, Loss: 0.76242 , Loss_valid: 0.87608 , lr: 0.02288
Epoch: 240, Loss: 0.75231 , Loss_valid: 0.89328 , lr: 0.02288
Epoch: 250, Loss: 0.73848 , Loss_valid: 0.89923 , lr: 0.02288
Epoch: 260, Loss: 0.72244 , Loss_valid: 0.92185 , lr: 0.02288
Epoch: 270, Loss: 0.71837 , Loss_valid: 0.89441 , lr: 0.02288
Epoch: 280, Loss: 0.70244 , Loss_valid: 0.91994 , lr: 0.02288
Epoch: 290, Loss: 0.68920 , Loss_valid: 0.91606 , lr: 0.02288
Epoch: 300, Loss: 0.68744 , Loss_valid: 0.94420 , lr: 0.02288
Epoch: 310, Loss: 0.68034 , Loss_valid: 0.92912 , lr: 0.02059
Epoch: 320, Loss: 0.66862 , Loss_valid: 0.97463 , lr: 0.02059
Epoch: 330, Loss: 0.66328 , Loss_valid: 0.96139 , lr: 0.02059
Epoch: 340, Loss: 0.66524 , Loss_valid: 0.92839 , lr: 0.02059
Epoch: 350, Loss: 0.64902 , Loss_valid: 0.97426 , lr: 0.01853
Epoch: 360, Loss: 0.63730 , Loss_valid: 0.97161 , lr: 0.01853
Epoch: 370, Loss: 0.63587 , Loss_valid: 0.96699 , lr: 0.01853
Epoch: 380, Loss: 0.63489 , Loss_valid: 0.94800 , lr: 0.01668
Epoch: 390, Loss: 0.62138 , Loss_valid: 0.98730 , lr: 0.01668
Epoch: 400, Loss: 0.61612 , Loss_valid: 1.00091 , lr: 0.01501
Epoch: 410, Loss: 0.60611 , Loss_valid: 1.00102 , lr: 0.01501
Epoch: 420, Loss: 0.60122 , Loss_valid: 1.01488 , lr: 0.01501
Epoch: 430, Loss: 0.59859 , Loss_valid: 1.03265 , lr: 0.01501
Epoch: 440, Loss: 0.60626 , Loss_valid: 0.98629 , lr: 0.01351
Epoch: 450, Loss: 0.59467 , Loss_valid: 1.02511 , lr: 0.01351
Epoch: 460, Loss: 0.58322 , Loss_valid: 1.04381 , lr: 0.01351
Epoch: 470, Loss: 0.57967 , Loss_valid: 1.05414 , lr: 0.01351
Epoch: 480, Loss: 0.59828 , Loss_valid: 1.01195 , lr: 0.01216
Epoch: 490, Loss: 0.57783 , Loss_valid: 1.04054 , lr: 0.01094
Epoch: 500, Loss: 0.56883 , Loss_valid: 1.04576 , lr: 0.01094
Epoch: 510, Loss: 0.56206 , Loss_valid: 1.06330 , lr: 0.01094
Epoch: 520, Loss: 0.55730 , Loss_valid: 1.08246 , lr: 0.01094
Epoch: 530, Loss: 0.55305 , Loss_valid: 1.11677 , lr: 0.01094
Epoch: 540, Loss: 0.55478 , Loss_valid: 1.12357 , lr: 0.00985
Epoch: 550, Loss: 0.54799 , Loss_valid: 1.12652 , lr: 0.00985
Epoch: 560, Loss: 0.54452 , Loss_valid: 1.13231 , lr: 0.00985
Epoch: 570, Loss: 0.53961 , Loss_valid: 1.14211 , lr: 0.00886
Epoch: 580, Loss: 0.53510 , Loss_valid: 1.15727 , lr: 0.00886
Epoch: 590, Loss: 0.53428 , Loss_valid: 1.16417 , lr: 0.00886
Epoch: 600, Loss: 0.53007 , Loss_valid: 1.17492 , lr: 0.00886
Epoch: 610, Loss: 0.52665 , Loss_valid: 1.19026 , lr: 0.00886
Epoch: 620, Loss: 0.52201 , Loss_valid: 1.19882 , lr: 0.00798
Epoch: 630, Loss: 0.51790 , Loss_valid: 1.19867 , lr: 0.00798
Epoch: 640, Loss: 0.51407 , Loss_valid: 1.23578 , lr: 0.00798
Epoch: 650, Loss: 0.51048 , Loss_valid: 1.24741 , lr: 0.00798
Epoch: 660, Loss: 0.50739 , Loss_valid: 1.24768 , lr: 0.00798
Epoch: 670, Loss: 0.50299 , Loss_valid: 1.25863 , lr: 0.00798
Epoch: 680, Loss: 0.50126 , Loss_valid: 1.28079 , lr: 0.00798
Epoch: 690, Loss: 0.50005 , Loss_valid: 1.28577 , lr: 0.00798
Epoch: 700, Loss: 0.49493 , Loss_valid: 1.30260 , lr: 0.00718
Epoch: 710, Loss: 0.49159 , Loss_valid: 1.33307 , lr: 0.00718
Epoch: 720, Loss: 0.48919 , Loss_valid: 1.31761 , lr: 0.00718
Epoch: 730, Loss: 0.48820 , Loss_valid: 1.35105 , lr: 0.00718
Epoch: 740, Loss: 0.48467 , Loss_valid: 1.36124 , lr: 0.00718
Epoch: 750, Loss: 0.48053 , Loss_valid: 1.36916 , lr: 0.00718
Epoch: 760, Loss: 0.47790 , Loss_valid: 1.40329 , lr: 0.00718
Epoch: 770, Loss: 0.47623 , Loss_valid: 1.39230 , lr: 0.00646
Epoch: 780, Loss: 0.47302 , Loss_valid: 1.42231 , lr: 0.00646
Epoch: 790, Loss: 0.47060 , Loss_valid: 1.44392 , lr: 0.00646
Epoch: 800, Loss: 0.46743 , Loss_valid: 1.45224 , lr: 0.00646
Epoch: 810, Loss: 0.46663 , Loss_valid: 1.45919 , lr: 0.00581
Epoch: 820, Loss: 0.46373 , Loss_valid: 1.46976 , lr: 0.00581
Epoch: 830, Loss: 0.46277 , Loss_valid: 1.47158 , lr: 0.00523
Epoch: 840, Loss: 0.46052 , Loss_valid: 1.49458 , lr: 0.00523
Epoch: 850, Loss: 0.45873 , Loss_valid: 1.49603 , lr: 0.00523
Epoch: 860, Loss: 0.45715 , Loss_valid: 1.50548 , lr: 0.00523
Epoch: 870, Loss: 0.45689 , Loss_valid: 1.51772 , lr: 0.00523
Epoch: 880, Loss: 0.45414 , Loss_valid: 1.51268 , lr: 0.00471
Epoch: 890, Loss: 0.45319 , Loss_valid: 1.51110 , lr: 0.00471
Epoch: 900, Loss: 0.44999 , Loss_valid: 1.54060 , lr: 0.00471
Epoch: 910, Loss: 0.44626 , Loss_valid: 1.56482 , lr: 0.00471
Epoch: 920, Loss: 0.44512 , Loss_valid: 1.58074 , lr: 0.00471
Epoch: 930, Loss: 0.44234 , Loss_valid: 1.59516 , lr: 0.00471
Epoch: 940, Loss: 0.44033 , Loss_valid: 1.59582 , lr: 0.00471
Epoch: 950, Loss: 0.43926 , Loss_valid: 1.60843 , lr: 0.00471
Epoch: 960, Loss: 0.44000 , Loss_valid: 1.61440 , lr: 0.00471
Epoch: 970, Loss: 0.43613 , Loss_valid: 1.60503 , lr: 0.00424
Epoch: 980, Loss: 0.43454 , Loss_valid: 1.62839 , lr: 0.00424
Epoch: 990, Loss: 0.43331 , Loss_valid: 1.63874 , lr: 0.00424
Training took 20.11829423904419 seconds

What’s wrong with the training code???

I’m not entirely sure what the issue is. At a glance, your training loop doesn’t look logically incorrect. The reported training time does seem very low for 1000 passes, perhaps because your model is not very complex, or because the measurement is off by an order of magnitude.

However, here are a few things:

  1. What you have called an “epoch” actually seems to be an “iteration”. An iteration is a pass through one batch, while an epoch is an entire pass through your dataset. Are you using a DataLoader to feed data to the model, or is the entire dataset being passed at once? I would first recommend passing your data through in batches (check out PyTorch’s Dataset and DataLoader classes, and good practices on splitting and/or shuffling your data).
  2. Around epochs 100-200, your model appears to start overfitting: your training loss keeps decreasing while your validation loss increases. I would look up the term if you are unsure about it, but essentially, your model is “learning” your training set too well and not generalizing to your validation set.
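For point 1, here is a minimal sketch of what mini-batch training with a DataLoader looks like. The data shapes, model, and hyperparameters below are toy placeholders, not your actual setup:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Toy stand-ins for your data: 256 sequences, length 10, 4 features each.
x = torch.randn(256, 10, 4)
y = torch.randn(256, 1)

# Wrap the tensors and let the DataLoader batch and shuffle them.
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

# Stand-in model (substitute your LSTM here).
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(40, 1))
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

for epoch in range(5):          # one epoch = one full pass over the dataset
    net.train()
    for xb, yb in loader:       # one iteration = one mini-batch
        optimizer.zero_grad()
        loss_fn(net(xb), yb).backward()
        optimizer.step()
```

With 256 samples and a batch size of 32, each epoch here is 8 iterations; in your current loop, each "epoch" is a single iteration over one fixed batch.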

These are just some general tips and pointers. If you can be more specific about what the issue is, I or others may be able to help further. This is a good start. Good luck!
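For point 2, a common guard against overfitting is early stopping: track the best validation loss and stop (and restore the best weights) once it stops improving. A minimal sketch, with a placeholder model and a dummy validation loss standing in for your computed `Loss_valid`:

```python
import copy
import torch

net = torch.nn.Linear(4, 1)     # stand-in for your model
best_valid = float("inf")
best_state = None
patience, bad = 5, 0            # stop after 5 checks with no improvement

for epoch in range(100):
    # ... your training step would go here ...
    valid_loss = 1.0 / (epoch + 1)          # placeholder for Loss_valid.item()
    if valid_loss < best_valid:
        best_valid, bad = valid_loss, 0
        best_state = copy.deepcopy(net.state_dict())
    else:
        bad += 1
        if bad >= patience:
            break               # validation loss has plateaued; stop training

net.load_state_dict(best_state) # restore the best checkpoint seen
```

In your log, something like this would have stopped training around epoch 30-40, where the validation loss bottoms out near 0.80 before climbing back past 1.6.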

You might also need to post the code for your model.