LSTM: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Hello all, I am quite new to PyTorch, so this might be a trivial question. I have some time-series data from 4 classes and am using CrossEntropyLoss(). I keep getting the error mentioned in the topic title. I read a few other threads about this error but was not able to figure it out.
The model class, including the train function, is below:

import time
import torch
import torch.nn as net

class pytorch_lstm(net.Module):
    def __init__(self, features, hidden_size, sequence_length):
        super(pytorch_lstm, self).__init__()
        self.loss = net.CrossEntropyLoss()
        self.criterion = self.loss
        self.features = features
        self.hidden_size = hidden_size
        self.seq_length = sequence_length
        self.lstm = net.LSTM(
            input_size=self.features,
            hidden_size=hidden_size,
            num_layers=313,
            batch_first=True
        )
        self.linear = net.Linear(self.hidden_size*self.seq_length, 4)

    def init_Hidden(self):
        hidden_state = torch.zeros(313, 1, self.hidden_size)
        cell_state = torch.zeros(313, 1, self.hidden_size)
        self.hidden = (hidden_state, cell_state)

    def forward(self, X):
        lstm_out, self.hidden = self.lstm(X, self.hidden)
        out = self.linear(lstm_out.view(-1))
        return out

    def train_model(self, model, dataloader, num_epochs):
        least_loss = 1

        optimizer = torch.optim.Adam(model.parameters())
        training_loss = []
        for i in range(num_epochs):
            optimizer.zero_grad()
            st = time.time()
            epoch_loss = 0
            for _, (x, y) in enumerate(dataloader):
                model.init_Hidden()
                x = x.float()
                y = y.float()
                #x = x.cuda()
                #y = y.cuda()
                print(y)
                output = model(x)
                print(output)
                loss = self.criterion(output, y)
                loss.backward()
                optimizer.step()

The data is being loaded in the following manner, where the labels are one-hot encoded:


    # requires os, numpy, and torch to be imported at module level
    train_data = []
    Data = []
    Labels = []
    folders = ['769', '770', '771', '772']
    for folder in folders:
        files = os.listdir(path + '/' + folder)
        os.chdir(path + '/' + folder)
        for file in files:
            data = numpy.load(file)
            data = numpy.transpose(data)
            #Data.append(data)
            #Labels.append((int(folder)))
            if folder =='769':
                label = numpy.array([1, 0 ,0, 0])
            elif folder=='770':
                label = numpy.array([0, 1, 0, 0])
            elif folder == '771':
                label = numpy.array([0, 0, 1, 0])
            elif folder == '772':
                label = numpy.array([0 ,0, 0, 1])
            train_data.append([data, label])

    train_loader = torch.utils.data.DataLoader(train_data, batch_size=1, shuffle=True)
    return train_loader

Can someone please tell me where I am going wrong?
TIA

Could you post the stack trace which shows which line of code raised this error?

Besides this indexing error, you will get errors using your criterion, as nn.CrossEntropyLoss expects the target to be a LongTensor containing class indices, while it seems you are trying to pass a one-hot encoded target.
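
A minimal sketch of that conversion, assuming the one-hot labels built in the data-loading code above (the tensors here are illustrative): argmax over the class dimension recovers the integer class index that nn.CrossEntropyLoss expects.

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# hypothetical model output for a batch of 1 sample over 4 classes
output = torch.randn(1, 4)                   # shape [batch_size, num_classes]

# one-hot target as built in the data-loading code above
y_onehot = torch.tensor([[0., 1., 0., 0.]])  # shape [batch_size, num_classes]

# CrossEntropyLoss wants class indices of dtype long with shape [batch_size]
y = y_onehot.argmax(dim=1)                   # tensor([1])

loss = criterion(output, y)

Alternatively, the dataset could store the labels directly as integer class indices (0 to 3), which avoids the conversion in the training loop.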

                loss = self.criterion(output, y)

This is the line throwing the error.
So for making the labels, would something like Long(number_representing_class) suffice?

Fixed the issue by changing the view to out = out.view(out.size(0), -1).
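
For anyone landing here later: the flattened output from lstm_out.view(-1) is one-dimensional, so the loss has no class dimension (dim 1) to index, which is what raises the IndexError. A minimal sketch of a corrected forward, assuming the layer definitions from the original post (the poster's exact fix may differ slightly):

    def forward(self, X):
        # X: [batch_size, seq_len, features] because the LSTM uses batch_first=True
        lstm_out, self.hidden = self.lstm(X, self.hidden)
        # keep the batch dimension and flatten the rest: [batch_size, seq_len * hidden_size]
        out = lstm_out.reshape(lstm_out.size(0), -1)
        # the linear layer maps to the 4 class scores: [batch_size, 4]
        out = self.linear(out)
        return out

With the batch dimension kept, the model output has shape [batch_size, 4] and works with nn.CrossEntropyLoss once the targets are class indices, as discussed above.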