Hi,
I’d like to build an LSTM autoencoder for roads. I read this article, and it is quite clear how to build an encoder-decoder for one road. However, my question is: is it possible to pass the data for all roads to the encoder? I don’t want to process one road at a time, but rather handle every road in each batch while training the model. I am thinking of reshaping the data to either (1) include the road id as a feature, or (2) combine all roads at each timestamp regardless of road id, e.g. for feature_1 at 01:00:00 → road 1 (0.23), road 2 (0.13).
My data looks like the following. For each road (road_id), we have a measurement every 2 minutes for two features.
|road_id|timestamp|feature_1|feature_2|
| --- | --- | --- | --- |
|1|2020-12-20 01:00:00|0.23|0.1|
|1|2020-12-20 01:02:00|0.3|0.12|
|1|2020-12-20 01:04:00|0.3|0.12|
|2|2020-12-20 01:00:00|0.13|0.2|
|2|2020-12-20 01:02:00|0.2|0.4|
|2|2020-12-20 01:04:00|0.3|0.13|
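For option (1), one way I imagine reshaping the table is to stack each road as one sample along the batch dimension, giving a tensor of shape (n_roads, seq_len, n_features) that an LSTM with `batch_first=True` accepts directly. A minimal sketch using pandas and NumPy, with the toy values from the table above (the variable names are my own, not from the article):

```python
import pandas as pd

# Toy dataset matching the table above.
df = pd.DataFrame({
    "road_id":   [1, 1, 1, 2, 2, 2],
    "timestamp": pd.to_datetime([
        "2020-12-20 01:00:00", "2020-12-20 01:02:00", "2020-12-20 01:04:00",
        "2020-12-20 01:00:00", "2020-12-20 01:02:00", "2020-12-20 01:04:00",
    ]),
    "feature_1": [0.23, 0.3, 0.3, 0.13, 0.2, 0.3],
    "feature_2": [0.1, 0.12, 0.12, 0.2, 0.4, 0.13],
})

# Sort so each road's rows line up by time, then reshape the feature
# columns to (n_roads, seq_len, n_features).
df = df.sort_values(["road_id", "timestamp"])
n_roads = df["road_id"].nunique()
features = df[["feature_1", "feature_2"]].to_numpy()
batch = features.reshape(n_roads, -1, 2)
print(batch.shape)  # (2, 3, 2): 2 roads, 3 timesteps, 2 features
```

This assumes every road has the same number of timestamps; roads with missing intervals would need padding or resampling first.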
This is the encoder for one road, based on the link shared earlier.
import torch
import torch.nn as nn

class LSTMEncoder(nn.Module):
    def __init__(self, n_features, hidden_size, n_layers):
        super(LSTMEncoder, self).__init__()
        self.n_features = n_features
        self.hidden_size = hidden_size
        self.n_layers = n_layers
        self.encoder = nn.LSTM(
            self.n_features,
            self.hidden_size,
            batch_first=True,
            num_layers=self.n_layers,
            bias=True,
        )

    def init_hidden_state(self, batch_size):
        # Zero-initialized (h_0, c_0), each of shape
        # (num_layers, batch_size, hidden_size).
        return (
            torch.zeros(self.n_layers, batch_size, self.hidden_size),
            torch.zeros(self.n_layers, batch_size, self.hidden_size),
        )

    def forward(self, x):
        enc_hidden = self.init_hidden_state(x.shape[0])
        _, enc_hidden = self.encoder(x.float(), enc_hidden)
        return enc_hidden
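If I understand the shapes correctly, nothing in this encoder is specific to one road: with `batch_first=True`, `nn.LSTM` treats the first dimension as the batch, so each road can simply be one element of the batch. A self-contained shape check using `nn.LSTM` directly (hidden size 16 is an arbitrary choice of mine):

```python
import torch
import torch.nn as nn

n_features, hidden_size, n_layers = 2, 16, 1
lstm = nn.LSTM(n_features, hidden_size, num_layers=n_layers, batch_first=True)

# 2 roads, 3 timesteps, 2 features — the same layout as the reshaped table.
x = torch.randn(2, 3, n_features)
h0 = torch.zeros(n_layers, 2, hidden_size)
c0 = torch.zeros(n_layers, 2, hidden_size)
out, (h_n, c_n) = lstm(x, (h0, c0))
print(out.shape)  # torch.Size([2, 3, 16]) — one output per road per timestep
print(h_n.shape)  # torch.Size([1, 2, 16]) — final hidden state per road
```

So the answer to question (1) may simply be: yes, pass a (n_roads, seq_len, n_features) tensor and the encoder handles all roads in one batch, without needing road_id as an input feature.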