Stagnant Loss and Output in Multi-Class Classification

I have a multi-class classification problem (6 classes, 14 input features), but when I train the model, the loss plateaus after a few epochs at 1.7918. That value is ln(6), i.e. the cross-entropy of a uniform prediction over 6 classes.
The model output is:

[[-0.1240, -0.1240, -0.1240, -0.1240, -0.1240, -0.1240]]

i.e. the same logit repeated for every class, and it does not change at all after that.
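For context, this is roughly how I read that output off (a sketch: `model` and `device` are defined further down, and the random row is just a stand-in for one of my real input rows):

with torch.no_grad():
    x = torch.randn(1, 14, device=device)  # stand-in for one real input row
    print(model(x))  # in my runs this prints the constant vector shown above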
The model:

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Feature encoder: 14 input features -> logits for 6 classes
class Enc(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(14, 12)
        self.fc2 = nn.Linear(12, 10)
        self.fc3 = nn.Linear(10, 8)
        self.fc4 = nn.Linear(8, 6)
        self.relu = nn.ReLU()

    def forward(self, x):
        hidden = self.relu(self.fc2(self.relu(self.fc1(x))))
        out = self.fc4(self.relu(self.fc3(hidden)))  # raw logits; CrossEntropyLoss applies log-softmax internally
        return out

model = Enc().to(device)
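To make the training loop below self-contained, here is a synthetic stand-in for `data` with the same shapes my model expects (14 features, 6 classes); `N` and the random tensors are placeholders, not my real dataset:

# Synthetic stand-in for my real data: N rows of 14 features, labels in {0..5}
N = 256
inputs = torch.randn(N, 14, device=device)
labels = torch.randint(0, 6, (N,), device=device)
data = (inputs, labels)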

The training loop:

criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.09)

for epoch in range(10000):
    inputs, labels = data
    optimizer.zero_grad()              # reset gradients from the previous step
    outputs = model(inputs)            # logits of shape (batch, 6)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    if epoch % 1999 == 1:
        print(loss.item())
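One diagnostic I can run is a quick probe of the per-layer gradient magnitudes right after `loss.backward()`, to see whether the gradients have collapsed to zero (sketch, run inside or right after the loop above):

# Inspect mean absolute gradient per parameter tensor
for name, param in model.named_parameters():
    if param.grad is not None:
        print(name, param.grad.abs().mean().item())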

What is causing this, and what should I change to get the loss to decrease?