CrossEntropyLoss value does not decrease

I’m working with images whose masks contain the pixel values 0 and 3.
The two classes are unbalanced, and a mask often contains only the value 0.

I’m trying 2-label (background + desired object) segmentation with a UNet.
Data info is as follows:
X : numpy_images (dtype:uint8)
Y : numpy_masks (dtype:uint8)

X_max --> 255, Y_max --> 3
X_shape --> (480, 512, 512), Y_shape --> (480, 512, 512)
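
Note that with a 2-channel output, nn.CrossEntropyLoss only accepts target class indices in {0, 1}, so masks containing the raw values 0 and 3 need to be remapped first. A minimal sketch, assuming Y is the mask array above:

```python
import numpy as np

# Collapse the raw mask values {0, 3} to class indices {0, 1};
# CrossEntropyLoss with C=2 output channels expects targets in [0, C-1].
Y = (Y > 0).astype(np.uint8)
```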

I converted X and Y to PIL images with x = Image.fromarray(x) in order to apply augmentations, and then converted them back to tensors with x = TF.to_tensor(x).float() and y = TF.to_tensor(y).long().
X_tensor_shape --> torch.Size([1, 1, 512, 512])
Y_tensor_shape --> torch.Size([1, 512, 512])
output_tensor_shape --> torch.Size([1, 2, 512, 512])
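
For reference, a minimal sketch of that conversion (augmentation omitted). TF.to_tensor rescales uint8 inputs to [0, 1], so the mask is converted from the numpy array here to keep integer class indices and the (512, 512) shape shown above; that part is an assumption about the intended pipeline:

```python
import numpy as np
import torch
from PIL import Image
import torchvision.transforms.functional as TF

def to_tensors(x, y):
    x_img = Image.fromarray(x)          # (512, 512) uint8 -> PIL image
    y_img = Image.fromarray(y)          # mask as PIL image for joint augmentation
    # ... apply the same geometric augmentations to x_img and y_img here ...
    x_t = TF.to_tensor(x_img).float()   # -> (1, 512, 512), values rescaled to [0, 1]
    # TF.to_tensor would also divide the uint8 mask by 255, so .long() would
    # zero out small label values; converting from numpy keeps the class indices.
    y_t = torch.from_numpy(np.array(y_img)).long()  # -> (512, 512)
    return x_t, y_t
```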

My training loop is as follows:

import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.99)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)


def fit(epoch, model, data_loader, phase='train'):
    if phase == 'train':
        exp_lr_scheduler.step()   # per-epoch LR decay
        model.train()
    if phase == 'valid':
        model.eval()
    running_loss = 0.0
    for batch_idx, (inputs, target) in enumerate(data_loader):
        if is_cuda:
            inputs, target = inputs.cuda(), target.cuda()
        if phase == 'train':
            optimizer.zero_grad()

        output = model(inputs)            # (N, 2, 512, 512) logits
        loss = criterion(output, target)  # target: (N, 512, 512) class indices

        running_loss += loss.item()

        if phase == 'train':
            loss.backward()
            optimizer.step()

    loss = running_loss / len(data_loader.dataset)

    print('{} Loss: {:.4f}'.format(phase, loss))
    return loss

But I got:
```
Epoch 0/4

train Loss: 0.1523
valid Loss: 0.1041

Epoch 1/4

train Loss: 0.0815
valid Loss: 0.0792

Epoch 2/4

train Loss: 0.0791
valid Loss: 0.0792

Epoch 3/4

train Loss: 0.0791
valid Loss: 0.0792

Epoch 4/4

train Loss: 0.0791
valid Loss: 0.0792
```
The loss value no longer decreases even as the number of epochs increases.

Is the loss no longer decreasing because the data is unbalanced?
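
(For reference, if imbalance is the cause, nn.CrossEntropyLoss accepts per-class weights. A minimal sketch with placeholder weights, not values derived from my data:)

```python
# Placeholder per-class weights [background, foreground]; in practice they
# would be estimated from the actual pixel counts of each class.
class_weights = torch.tensor([0.1, 0.9])
if is_cuda:
    class_weights = class_weights.cuda()
criterion = nn.CrossEntropyLoss(weight=class_weights)
```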

Your loss could have stagnated because your learning rate is too high. I suggest reducing your learning rate (use an LR scheduler to do it) and then continuing training.
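
For example, a sketch of that suggestion applied to the snippet above: lower the base learning rate and drive the schedule off the validation loss with ReduceLROnPlateau (the hyperparameter values and loader names are placeholders, and the StepLR call inside fit would then be dropped):

```python
from torch.optim import lr_scheduler

optimizer = optim.SGD(model.parameters(), lr=1e-4, momentum=0.99)  # lower base LR
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                           factor=0.1, patience=2)

for epoch in range(num_epochs):
    fit(epoch, model, train_loader, phase='train')
    valid_loss = fit(epoch, model, valid_loader, phase='valid')
    scheduler.step(valid_loss)  # reduce LR when the validation loss plateaus
```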