Error: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed

I am using an LSTM for multi-class many-to-one prediction (20 classes).
When trying to calculate the loss using CrossEntropyLoss, I receive the following error:

Error: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.

My model output has size [batch_size, number_of_classes] (number_of_classes = output_dim in the LSTM below), and my target has size [batch_size]. Some of my model's output values are negative: is this an issue?
Below is my model:

import torch
import torch.nn as nn


class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim, output_dim, dropout=None):
        super(LSTM, self).__init__()

        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.layer_dim = layer_dim
        self.output_dim = output_dim
        self.dropout = dropout
        # nn.LSTM requires a float for dropout, so fall back to 0.0 instead of None
        self.lstm = nn.LSTM(self.input_dim, self.hidden_dim, self.layer_dim,
                            batch_first=True,
                            dropout=self.dropout if self.dropout is not None else 0.0)

        self.fclstm = nn.Linear(self.hidden_dim, self.output_dim)

    def forward(self, x):
        out = x.float()

        # zero-initialize the hidden and cell states on the model's device
        device = next(self.parameters()).device
        h0 = torch.zeros(self.layer_dim, out.size(0), self.hidden_dim, device=device)
        c0 = torch.zeros(self.layer_dim, out.size(0), self.hidden_dim, device=device)
        out, _ = self.lstm(out, (h0, c0))

        # get last time step's hidden state (many-to-one)
        out = self.fclstm(out[:, -1, :])
        return out
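
For context, this is roughly how I call the model and compute the loss (a minimal sketch; the batch size, sequence length, and dimensions below are placeholder values, not my real ones):

# minimal sketch of the step that triggers the error
model = LSTM(input_dim=10, hidden_dim=64, layer_dim=2, output_dim=20)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 15, 10)            # [batch_size, seq_len, input_dim]
targets = torch.randint(0, 20, (32,))  # [batch_size], class indices

logits = model(x)                      # [batch_size, output_dim] = [32, 20]
loss = criterion(logits, targets)      # works when all targets are in [0, 19]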

But even when I add a ReLU after the fclstm linear layer to clamp all negative output values to zero, I still get the same error.
Does anyone know what is causing the error?
Thanks!

Make sure that every target label is between 0 and C-1 (inclusive), where C is the number of classes; the assertion `t >= 0 && t < n_classes` fails as soon as a label falls outside that range. The negative values in your model output are not the problem: nn.CrossEntropyLoss expects raw logits and applies log-softmax internally, so you should not add a ReLU (or any other activation) after the final linear layer.
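
As a quick sanity check before computing the loss (a minimal sketch; `targets` here is a stand-in for your real label tensor):

import torch

num_classes = 20
targets = torch.randint(0, num_classes, (32,))  # stand-in for your real labels

# all three checks must hold for nn.CrossEntropyLoss with class-index targets
assert targets.dtype == torch.long, targets.dtype
assert int(targets.min()) >= 0, "found a negative label"
assert int(targets.max()) < num_classes, "found a label >= num_classes"

If one of the last two asserts fires, the fix belongs in your label encoding (e.g. labels numbered 1..20 instead of 0..19), not in the model.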