LSTM for multiple time series

Hello, I would appreciate your help with my problem.
I want to use an LSTM on the data structure below to predict the next rating of a review.
Business_1 = [1, 3, 4, 5, 2, 4, 1, 1, 4, 5, 2, 1]
Business_2 = [3, 3, 2, 2, 3, 4, 5, 2, 4, 1, 5, 4]
Business_3 = [3, 2, 1, 2, 1, 4, 5, 2, 1, 1, 2, 4]
.
.
.
Business_N = [3, 2, 1, 1, 1, 5, 3, 3, 3, 2, 1, 1]

My project is multivariate, not univariate: besides the rating, we have other numerical features.

I want to predict the next rating based on a sequence length L.
I want the model to use all business ratings at the same time, not the ratings of one business at a time.

Business_1 = [1, 3, 4, 5, 2, 4, 1, 1, 4, 5, 2, 1]
converted to windows with L = 7:
data, label
[1, 3, 4, 5, 2, 4, 1], [1]
[3, 4, 5, 2, 4, 1, 1], [4]
[4, 5, 2, 4, 1, 1, 4], [5]
[5, 2, 4, 1, 1, 4, 5], [2]
[2, 4, 1, 1, 4, 5, 2], [1]

The data structure above has shape 5 × 7 (5 windows of 7 features each).

Now I consider all businesses, so I think the shape of the overall data structure will be N × 5 × 7.

My question is:

How should I use this data structure with an LSTM? I want the model to consider all businesses at the same time and predict each business's future rating.

Thanks in advance for your help.

I believe custom LSTM training is not necessary; a standard LSTM model should suffice. In essence, to predict the next rating for any business, simply feed the last seven ratings into the model. For training, the function below can prepare the data for a single business.

```python
import numpy as np

def create_sequences(data, seq_length=7):
    """Slide a window of length `seq_length` over `data` and
    return (windows, next-value labels)."""
    sequences = []
    labels = []
    for i in range(len(data) - seq_length):
        sequences.append(data[i:i + seq_length])
        labels.append(data[i + seq_length])
    return np.array(sequences), np.array(labels)
```
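To train one model on all businesses at once, a common approach is to pool the windows from every business into a single training set, so a single LSTM learns from all of them simultaneously rather than business by business. Here is a sketch of that data preparation, assuming the window helper above (redefined here so the snippet is self-contained); the business arrays are the ones from the question, and the shapes are illustrative:

```python
import numpy as np

def create_sequences(data, seq_length=7):
    # Same helper as above: slide a length-7 window, label = next rating.
    sequences, labels = [], []
    for i in range(len(data) - seq_length):
        sequences.append(data[i:i + seq_length])
        labels.append(data[i + seq_length])
    return np.array(sequences), np.array(labels)

businesses = [
    [1, 3, 4, 5, 2, 4, 1, 1, 4, 5, 2, 1],  # Business_1
    [3, 3, 2, 2, 3, 4, 5, 2, 4, 1, 5, 4],  # Business_2
    [3, 2, 1, 2, 1, 4, 5, 2, 1, 1, 2, 4],  # Business_3
]

# Build windows per business, then concatenate into one training set.
X_parts, y_parts = zip(*(create_sequences(np.array(b)) for b in businesses))
X = np.concatenate(X_parts)   # shape: (total_windows, 7)
y = np.concatenate(y_parts)   # shape: (total_windows,)

# An LSTM expects (batch, seq_len, n_features). With rating only,
# n_features is 1; your extra numeric features would be stacked
# along this last axis instead.
X = X[..., np.newaxis]
print(X.shape, y.shape)  # (15, 7, 1) (15,)
```

Each batch drawn from `X` then mixes windows from different businesses, so every weight update sees all of them together.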

Thanks, but my question is about the LSTM model itself. I want a model that considers all businesses at the same time.
For example, a model could take business 1's data, train on it, then in the next step take business 2's data and update the weights, and so on. But I want a model that trains on all businesses at the same time and considers all of them together.

Sounds like you are looking for a transformer or some version of self-attention. Have you read the paper “Attention Is All You Need” (2017) yet?

The individual reviews can be treated as tokens and assigned an encoding vector of length 5, which lets the model learn the relationships between review ratings. Here is a good tutorial on Transformers that you might find useful:

https://pytorch.org/tutorials/beginner/transformer_tutorial.html
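A minimal sketch of that idea in PyTorch, treating ratings 1–5 as token ids: an embedding layer, one `TransformerEncoder` layer to mix the window with self-attention, and a linear head that predicts the next rating. All sizes here (`d_model`, `nhead`, number of layers) are illustrative placeholders, not tuned values, and a real model would also add the positional encoding shown in the tutorial:

```python
import torch
import torch.nn as nn

class NextRatingTransformer(nn.Module):
    def __init__(self, num_ratings=5, d_model=16, nhead=2):
        super().__init__()
        # +1 so ratings 1..5 can be used directly as token ids
        self.embed = nn.Embedding(num_ratings + 1, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(d_model, num_ratings)

    def forward(self, x):                # x: (batch, seq_len) integer ratings
        h = self.encoder(self.embed(x))  # (batch, seq_len, d_model)
        return self.head(h[:, -1, :])    # logits over the next rating

model = NextRatingTransformer()
window = torch.tensor([[1, 3, 4, 5, 2, 4, 1]])  # one L=7 window
logits = model(window)
print(logits.shape)  # torch.Size([1, 5])
```

Because every window from every business is just a token sequence, batches can mix all businesses freely, which gives the "train on everything at the same time" behavior asked about above.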