Hello there. I’m currently working on an LSTM autoencoder. I have a large number of samples, and each sample contains 120 features. For now, I’m creating sequences of length 1 with a batch_size of 1, and everything works fine. I first convert my data array to a list, and then use the following function to turn it into sequences of length 1:
```python
def dataset(mydatalist):
    dataset = [torch.tensor(s).unsqueeze(1) for s in mydatalist]
    n_seq, seq_len, n_features = torch.stack(dataset).shape  # n_seq, 4, 1
    return dataset, seq_len, n_features
```
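For context, here is a minimal standalone sketch of how I call this function. The toy data below is made up just so the snippet runs; my real samples have 120 features each:

```python
import torch

# Same helper as above, repeated so this snippet runs standalone.
def dataset(mydatalist):
    dataset = [torch.tensor(s).unsqueeze(1) for s in mydatalist]
    n_seq, seq_len, n_features = torch.stack(dataset).shape
    return dataset, seq_len, n_features

# Toy stand-in for my real data: 8 samples, 4 values each.
mydatalist = [[float(i), i + 1.0, i + 2.0, i + 3.0] for i in range(8)]

seqs, seq_len, n_features = dataset(mydatalist)
print(len(seqs), seqs[0].shape)  # 8 torch.Size([4, 1])
```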
Then for training I iterate with `for seq_true in train_dataset:`, which amounts to a batch_size of 1. But since I have so many samples, the training procedure is too slow, so I want to increase the batch_size to get better performance. Could anyone please help me with that? I know it may be a simple question, but everything I try leads to shape-related errors in the LSTM network.
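Roughly, my current working loop looks like the sketch below. The model here is a placeholder, not my actual autoencoder; the point is the batch_size-of-1 iteration:

```python
import torch
import torch.nn as nn

# Placeholder stand-in for my LSTM autoencoder: encode each
# sequence, then reconstruct it (not my exact architecture).
class TinyLSTMAutoencoder(nn.Module):
    def __init__(self, n_features, hidden_size=16):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, n_features, batch_first=True)

    def forward(self, x):
        z, _ = self.encoder(x)
        out, _ = self.decoder(z)
        return out

# Toy sequences of shape (seq_len, n_features) = (4, 1).
train_dataset = [torch.randn(4, 1) for _ in range(8)]
model = TinyLSTMAutoencoder(n_features=1)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# batch_size of 1: each iteration feeds a single sequence,
# unsqueezed to (1, seq_len, n_features) for batch_first LSTMs.
for seq_true in train_dataset:
    optimizer.zero_grad()
    seq_true = seq_true.unsqueeze(0)      # (1, 4, 1)
    seq_pred = model(seq_true)
    loss = criterion(seq_pred, seq_true)
    loss.backward()
    optimizer.step()
```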
It would also be very nice if you could point out how to create sequences of length greater than 1 alongside increasing the batch_size.
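For reference, the direction I’ve been trying is something like the sketch below, using a `DataLoader` to batch sequences; whether this is the right way to get batched `(batch, seq_len, n_features)` inputs is exactly my question. The sizes here are made up:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# With batch_first=True, an LSTM expects input of shape
# (batch, seq_len, n_features).
data = torch.randn(100, 10, 1)   # 100 sequences of length 10, 1 feature
loader = DataLoader(TensorDataset(data), batch_size=16, shuffle=True)

for (batch,) in loader:
    print(batch.shape)           # torch.Size([16, 10, 1])
    break
```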
Many thanks in advance.