LSTM Produces Random Predictions

I have trained an LSTM in PyTorch on financial data, where a series of 14 values is used to predict the 15th. I split the data into training, test, and validation sets and trained the model until the loss stabilized. Everything looked good when I used the model to predict on the validation data.

When I was writing up my research for my manager, I noticed that I got different predicted values each time I ran the model (prediction only) on the same input. This is not what I expected, so I read some of the literature but could not explain my results. Intuitively, they suggest there is some random variable, node, or gate influencing the prediction, but I cannot figure out where it is or whether/how it can be configured. (A minimal example of how I run prediction is below, after the model definition.)

Here is my model definition:

import torch
import torch.nn as nn


class TimeSeriesNNModel(nn.Module):
    def __init__(self):
        super(TimeSeriesNNModel, self).__init__()
        self.lstm1 = nn.LSTM(input_size=14, hidden_size=50, num_layers=1)
        self.lstm2 = nn.LSTM(input_size=50, hidden_size=25, num_layers=1)
        self.linear = nn.Linear(in_features=25, out_features=1)

        self.h_t1 = None
        self.c_t1 = None
        self.h_t2 = None
        self.c_t2 = None

    def initialize_model(self):
        self.h_t1 = torch.rand(1, 1, 50, dtype=torch.double)
        self.c_t1 = torch.rand(1, 1, 50, dtype=torch.double)
        self.h_t2 = torch.rand(1, 1, 25, dtype=torch.double)
        self.c_t2 = torch.rand(1, 1, 25, dtype=torch.double)

    def forward(self, input_data, future=0):
        outputs = []
        self.initialize_model()

        output = None
        for i, input_t in enumerate(input_data.chunk(input_data.size(1), dim=1)):
            self.h_t1, self.c_t1 = self.lstm1(input_t, (self.h_t1, self.c_t1))
            self.h_t2, self.c_t2 = self.lstm2(self.h_t1, (self.h_t2, self.c_t2))
            output = self.linear(self.h_t2)
            outputs += [output]

        outputs = torch.stack(outputs, 1).squeeze(2)
        return outputs
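
For reference, this is roughly how I run prediction; x_val is just a placeholder for one batch of validation inputs, and the training step is omitted:

model = TimeSeriesNNModel()
model.double()  # parameters need to be double to match the hidden states above
# ... training loop omitted ...

# Calling the model twice on the same validation batch:
out1 = model(x_val)
out2 = model(x_val)
print(torch.allclose(out1, out2))  # comes out False for me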

If anyone can point out what is wrong with my model or my understanding, I’d be really grateful.

An RNN does give the same result once its parameters are fixed.
I ran into the same problem yesterday.

The seemingly random behavior may be caused by:

  1. Dropout: during training, a dropout layer randomly zeroes some units, which introduces randomness; see https://pytorch.org/docs/stable/nn.html#dropout-layers for more information. At prediction time you should switch the model to evaluation mode with model.eval(), which sets model.training to False so dropout is disabled and all units are used (see the sketch after this list).

  2. Random initialization: it is not enough to reconstruct the network, you also have to load the trained parameters (this was my case; again, see the sketch after this list). Note that in your code, initialize_model() also draws fresh random hidden and cell states with torch.rand on every forward pass, which by itself will produce different outputs on every run.
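
A minimal inference sketch along those lines; the checkpoint filename is only an example and assumes the trained weights were saved earlier with torch.save(model.state_dict(), ...), and val_batch stands for whatever input tensor you predict on:

model = TimeSeriesNNModel()                           # rebuild the architecture
model.load_state_dict(torch.load("lstm_weights.pt"))  # load the trained parameters (example filename)
model.eval()                                          # disable dropout / switch to inference behavior

with torch.no_grad():                                 # no gradients needed for plain prediction
    prediction = model(val_batch)
# With fixed weights and dropout off, repeated calls on the same input should
# match, provided forward() itself contains no random operations.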

This might also be useful: https://pytorch.org/docs/stable/notes/randomness.html. It covers how to make PyTorch runs reproducible.
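
If you want the remaining randomness (e.g. weight initialization during training) to be repeatable rather than eliminated, the usual approach from that page is to seed every generator up front; a rough sketch, where seed_everything is just my own helper name:

import random
import numpy as np
import torch

def seed_everything(seed=0):
    # Seed Python, NumPy, and PyTorch (CPU and all CUDA devices) so that
    # random operations produce the same sequence on every run.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

seed_everything(42)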