Custom Dataset is changing the shape of my data

I am trying to implement a custom Dataset for a sliding window over my data to be fed into an LSTM.

As you can see below, my data is (1913, 30490). I wish to grab 14 rows at a time (batch_size=14), which should give me an object that is (14, 30490).

train_data_normalized.shape
(1913, 30490)

My custom Dataset is below:

import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, data, window):
        self.data = data
        self.window = window

    def __getitem__(self, index):
        # Return `window` consecutive rows starting at `index`
        x = self.data[index:index + self.window]
        return x

    def __len__(self):
        return len(self.data) - self.window

my_dataset = MyDataset(train_data_normalized, 1)
my_loader = torch.utils.data.DataLoader(my_dataset, batch_size=14, shuffle=False)

d = next(iter(my_loader))
print(d.shape)
torch.Size([14, 1, 30490])

I am trying to understand why it is producing (14, 1, 30490) instead of (14, 30490).

The end result I want is to be able to get sequences like 1-14, then 2-15, then 3-16, etc., so they are of size 14 and advance forward one index at a time.

I see what I am doing wrong. I needed to request a window of 14 and a batch size of 1 in my loader. Each item from the Dataset already has shape (window, features), and the DataLoader stacks batch_size of those items, which is where the extra dimension was coming from. It returns a 3D tensor, but actually that is what I want to feed the LSTM.
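
For anyone following along, here is a minimal sketch of that fix (assuming train_data_normalized is the (1913, 30490) data from above):

# One window of 14 rows per sample, one sample per batch.
my_dataset = MyDataset(train_data_normalized, 14)
my_loader = torch.utils.data.DataLoader(my_dataset, batch_size=1, shuffle=False)

d = next(iter(my_loader))
print(d.shape)
# torch.Size([1, 14, 30490]) -- (batch, seq_len, features),
# suitable for an LSTM with batch_first=True

Successive iterations then advance one index at a time, giving rows 0-13, then 1-14, then 2-15, and so on.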

Another issue I have, though, is how can I make sure that it doesn't return incomplete batches? Say for example I have a dataset that is 99x10. With a window size of 10 it yields 89 samples of shape 10x10, so with a batch size of 20 on my loader it would return:
Batch 0: 20x10x10
Batch 1: 20x10x10
Batch 2: 20x10x10
Batch 3: 20x10x10
Batch 4: 9x10x10

Looks like I just needed to add drop_last=True to my loader.
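
A minimal sketch of that, using the hypothetical 99x10 example above (assuming my_dataset wraps that data with window=10, yielding 89 samples of shape 10x10):

loader = torch.utils.data.DataLoader(my_dataset, batch_size=20, shuffle=False, drop_last=True)
for i, batch in enumerate(loader):
    print(i, batch.shape)
# Prints four batches of torch.Size([20, 10, 10]);
# the leftover 9 samples are dropped by drop_last=True.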