LSTM with other time series as features?

I am trying to build an LSTM model that predicts future values of a time series using its past values as well as other features (these other features being the crux of my question).

For example, I have a time series Y:

Y = [1, 2, 3, 1, 2, 3]

that also has a bunch of features associated with each time point:

X1 = [5, 5, 4, 3, 3, 3]
X2 = [2, 2, 1, 2, 2, 2]

I’m having the damnedest time figuring out how to adapt the sine-wave prediction LSTM example to my scenario, where the input at each timestep should contain both the past value and its associated features.

Does anybody have a link to an example of time series prediction using both autoregressive AND other features in PyTorch?

Hey, I’m facing a similar problem right now. Did you find a way to solve this, i.e. combining previous predictions with the current input features?

The simplest approach would be to torch.cat them together along the feature dimension (dim=2) and have a linear layer with the appropriate number of outputs near the top of the model. Then at prediction time, you can torch.cat the model’s outputs with the next timestep’s extrinsic features.
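A minimal sketch of that approach (the class name, hidden size, and two-input forward signature are my own illustrative choices, not a fixed recipe):

import torch
import torch.nn as nn

class CatLSTM(nn.Module):
    def __init__(self, n_extrinsic, hidden_size=32):
        super(CatLSTM, self).__init__()
        # each timestep's input: 1 past value of Y plus n_extrinsic other features
        self.lstm = nn.LSTM(1 + n_extrinsic, hidden_size, batch_first=True)
        self.to_output = nn.Linear(hidden_size, 1)

    def forward(self, y_past, x_extrinsic):
        # y_past: (batch, time, 1), x_extrinsic: (batch, time, n_extrinsic)
        combined = torch.cat([y_past, x_extrinsic], dim=2)  # (batch, time, 1 + n_extrinsic)
        out, _ = self.lstm(combined)
        return self.to_output(out)  # one prediction per timestep: (batch, time, 1)

At prediction time, you would torch.cat the model’s last output with the next timestep’s extrinsic features in the same way and feed the result back in.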

Best regards

Thomas

This is an older example from PyTorch 0.3, but it might help…

So, torch.cat all your time series into a tensor of shape (batch, time, features), where features has size 2 in your case.
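For example, building that input from the X1 and X2 series in the question (treating them as the two features; the float dtype and the batch dimension are my assumptions):

import torch

X1 = torch.tensor([5., 5., 4., 3., 3., 3.])
X2 = torch.tensor([2., 2., 1., 2., 2., 2.])

# stack along the feature dim, then add a batch dim -> (batch=1, time=6, features=2)
inputs = torch.stack([X1, X2], dim=1).unsqueeze(0)

Then the model: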

import torch
import torch.nn as nn
from torch.autograd import Variable  # pre-0.4 style; plain tensors work in current PyTorch

class SimpleLSTM(nn.Module):
    def __init__(self, input_dims, sequence_length, cell_size, output_features=1):
        super(SimpleLSTM, self).__init__()
        self.input_dims = input_dims
        self.sequence_length = sequence_length
        self.cell_size = cell_size
        self.lstm = nn.LSTMCell(input_dims, cell_size)
        self.to_output = nn.Linear(cell_size, output_features)

    def forward(self, input):
        # input: (batch, time, features)
        h_t, c_t = self.init_hidden(input.size(0))

        outputs = []

        # step through the sequence one timestep at a time
        for input_t in torch.chunk(input, self.sequence_length, dim=1):
            h_t, c_t = self.lstm(input_t.squeeze(1), (h_t, c_t))
            outputs.append(self.to_output(h_t))

        return torch.stack(outputs, dim=1)  # (batch, time, output_features)

    def init_hidden(self, batch_size):
        # zero-initialised hidden and cell states, matching the weights' dtype/device
        hidden = Variable(next(self.parameters()).data.new(batch_size, self.cell_size), requires_grad=False)
        cell = Variable(next(self.parameters()).data.new(batch_size, self.cell_size), requires_grad=False)
        return hidden.zero_(), cell.zero_()

with output_features=1, self.to_output (an nn.Linear from cell_size down to 1) reduces the hidden state to a single predicted number per timestep…
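A quick usage sketch with the inputs tensor built above (the cell size is an arbitrary choice):

model = SimpleLSTM(input_dims=2, sequence_length=6, cell_size=16)
preds = model(inputs)  # (1, 6, 2) in -> (1, 6, 1) out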

the rest of my LSTM code is at https://github.com/DuaneNielsen/DualAttentionSeq2Seq