Time-series prediction day-by-day with Conv1D

Hello
I developed a standard Conv1D model in PyTorch to classify time series into 4 classes.
I gathered a train set (5,000 samples) and a test set (1,000 samples). The model predicts daily data in batches and performs quite well.

As the results were satisfactory, I then moved to the next step:

  1. I trained my model
  2. I saved the model
  3. I used the trained model on new daily data, feeding samples for prediction day-by-day instead of in batches (see the sketch below this list).
    The results were very disappointing (if not catastrophic).
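
For reference, here is a minimal sketch of what the day-by-day loop looks like (the names, e.g. daily_windows, and the (num_features, seq_length) window layout are placeholders, not my exact code):

    model.eval()   # NB: this call was missing in my original loop, see below
    with torch.no_grad():
        for x_day in daily_windows:
            x = torch.as_tensor(x_day, dtype=torch.double, device='cuda')
            x = x.unsqueeze(0)                  # batch dimension of 1
            logits = model(x)
            pred_class = logits.argmax(dim=1).item()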

Therefore, I investigated what happens: I saved my trained model, cleaned the GPU cache, re-loaded the model, and applied it to the test set.

Here is the original code:

    # Build DataLoaders and the model
    train_dl, test_dl = get_data_loaderRN(X_train, y_train, X_tests, y_tests, batch_size)
    model = Conv_1D(input_shape, nb_classes, num_features, seq_length, batch_size, iter_model, iter_pre, dropout)
    model = model.double()
    model.cuda()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning)
    # Train in batches, then compute train accuracy
    y_predtrain, running_loss = train_Conv_1D(model, criterion, optimizer, epochs, learning, verbose, train_dl)
    acc_train = accuracy_score(y_train, y_predtrain)
    # Evaluate on the test set in batches
    y_predsubt, y_pred_proba, running_loss = eval_model_a(model, nb_classes, criterion, test_dl)
    acc_test = accuracy_score(y_tests, y_predsubt)

And here is the code with the save and re-load steps:

    train_dl, test_dl = get_data_loaderRN(X_train, y_train, X_tests, y_tests, batch_size)
    model = Conv_1D(input_shape, nb_classes, num_features, seq_length, batch_size, iter_model, iter_pre, dropout)
    model = model.double()
    model.cuda()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning)
    y_predtrain, running_loss = train_Conv_1D(model, criterion, optimizer, epochs, learning, verbose, train_dl)
    acc_train = accuracy_score(y_train, y_predtrain)
    ## Save model and clean GPU cache + re-seed
    torch.save(model, model_path)   # pickles the whole module
    torch.cuda.empty_cache()
    torch.manual_seed(my_seed)
    torch.cuda.manual_seed(my_seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    ## Reload model and use it to predict
    model = torch.load(model_path)
    model.to(device)
    y_predsubt, y_pred_proba, running_loss = eval_model_a(model, nb_classes, criterion, test_dl)
    acc_test = accuracy_score(y_tests, y_predsubt)

The Save / Clean / Load steps are there to simulate what happens with the model used for daily prediction… and the results are far from those obtained with the original model.
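
As a side note (a sketch, not my exact code): the PyTorch docs recommend saving the state_dict rather than pickling the whole module with torch.save(model), which also avoids pickle/class-path issues when reloading:

    # Alternative save/load pattern: persist only the parameters
    torch.save(model.state_dict(), model_path)

    # ... later: rebuild the architecture, then load the weights back in
    model = Conv_1D(input_shape, nb_classes, num_features, seq_length,
                    batch_size, iter_model, iter_pre, dropout)
    model = model.double()
    model.load_state_dict(torch.load(model_path))
    model.to(device)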

How could I address this problem?
Thanks in advance for your support.
Best,

NB1. The output is twofold: “y_predsubt” records the predicted class and “y_pred_proba” records the probability the model assigns to each of the 4 classes.
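
In other words, something along these lines (a simplified sketch of the idea, not the exact code in eval_model_a):

    # given model logits of shape (batch, nb_classes):
    probs = torch.softmax(logits, dim=1)   # y_pred_proba: probability per class
    preds = probs.argmax(dim=1)            # y_predsubt: predicted class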

NB2. I did the same with an LSTM model and got the same issue :frowning:

If I understand the issue correctly, you are seeing a performance drop after loading a saved model and evaluating it on the same dataset?
If so, are you calling model.eval() while the evaluation is performed? Either way, could you pass a static input to the model before saving and after loading (e.g. torch.ones) and compare the outputs?
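
A minimal version of that check (assuming a (batch, num_features, seq_length) input layout for your Conv1D model):

    # Compare outputs on a static input before saving and after reloading
    model.eval()
    static_input = torch.ones(1, num_features, seq_length,
                              dtype=torch.double, device='cuda')
    with torch.no_grad():
        out_before = model(static_input)

    torch.save(model, model_path)
    model = torch.load(model_path)
    model.to(device)
    model.eval()
    with torch.no_grad():
        out_after = model(static_input)

    print(torch.allclose(out_before, out_after))   # should print True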


Hello ptrblck,
Thanks so much.
That is exactly my problem. The model does not predict the same results if:
A. I split my train and test sets and train + test it (with model.eval()) in batches
B. I split my train and test sets, train in batches, and test it day-by-day in a loop with model.eval()
C. I split my train and test sets, train in batches, save the model, re-load it, and test it with model.eval()

BUT… I hadn't been using model.eval(), whose purpose I discovered thanks to you, hence the instability.

I also faced another source of instability, even with model.eval(), which comes from the dropout in the LSTM. Removing it made the results stable, but less appealing.
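
For anyone hitting the same thing, one possible cause (this is a guess about the failure mode, shown as a standalone demo): dropout applied functionally via F.dropout defaults to training=True and ignores model.eval(), whereas the nn.Dropout module respects it:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    x = torch.ones(1, 8)

    drop = nn.Dropout(p=0.5)
    drop.eval()
    print(drop(x))                              # identity: the module respects eval()

    print(F.dropout(x, p=0.5))                  # still drops: training defaults to True
    print(F.dropout(x, p=0.5, training=False))  # identity when flagged explicitly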

The next task is to recover the quality of results I had before introducing model.eval(), while keeping reproducibility even for daily predictions.
THX again!