I have a training input of size `[3082086, 7, 30]`: that is, 3082086 samples, each a window of 7 frames, where every frame contains 30 numbers.

The training labels are of size `[3082086, 1]`: that is, one label for every window of 7 frames.

I tried to stack them to create a train set as follows:

`trainset = torch.hstack((train_input, train_labels))`

…but this raised an error: `RuntimeError: Sizes of tensors must match except in dimension 1. Got 30 and 1 in dimension 2 (The offending index is 1)`

Is there a way to create the dataset with these dimensions?
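For context, what I ultimately want is each `[7, 30]` window paired with its `[1]` label. A sketch of that pairing with `torch.utils.data.TensorDataset` (on small stand-in tensors, since it only requires the tensors to agree in dimension 0), in case that is the right tool here:

```python
import torch
from torch.utils.data import TensorDataset

# Stand-in tensors with the same ranks as the real data
# (N = 4 instead of 3082086, purely for illustration).
train_input = torch.randn(4, 7, 30)
train_labels = torch.randn(4, 1)

# TensorDataset pairs the tensors sample-by-sample along dim 0,
# so the [N, 7, 30] inputs and [N, 1] labels need not be stacked
# into a single tensor at all.
trainset = TensorDataset(train_input, train_labels)

window, label = trainset[0]
print(window.shape, label.shape)  # torch.Size([7, 30]) torch.Size([1])
```

This avoids `hstack` entirely, but I am not sure whether it is the idiomatic approach for data shaped like this.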