I added class weights to CrossEntropyLoss, but accuracy dropped

I added class weights to CrossEntropyLoss, but I noticed that prediction accuracy has dropped. Any idea why? Did I use the class weights incorrectly in the loss function?

from sklearn.utils.class_weight import compute_class_weight
import numpy as np

labels = np.ravel(train_loader.dataset.labels)
class_weights = compute_class_weight(
    class_weight='balanced',
    classes=np.unique(labels),
    y=labels)

This yields:

class_weights = [1., 83.36, 90.92]

loss_function = torch.nn.CrossEntropyLoss(
    weight=torch.tensor(class_weights, dtype=torch.float32)).to(device)

You might be running into the accuracy paradox if your dataset is imbalanced: if class 0 occurs in 99% of all samples, a model that always returns 0 already achieves 99% accuracy, yet is useless. Class weights deliberately push the model to pay more attention to the minority classes, which typically costs some accuracy on the majority class, so a drop in overall accuracy is expected and not necessarily a problem. Look at per-class metrics (recall, F1, a confusion matrix) rather than plain accuracy.
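To see the paradox concretely, here is a minimal sketch with hypothetical labels (99% class 0, 1% class 1) and a constant predictor; the names and class ratio are made up for illustration:

```python
import numpy as np

# Hypothetical imbalanced labels: ~99% class 0, ~1% class 1.
rng = np.random.default_rng(0)
y_true = rng.choice([0, 1], size=10_000, p=[0.99, 0.01])

# A "model" that always predicts the majority class.
y_pred = np.zeros_like(y_true)

accuracy = (y_pred == y_true).mean()               # high, despite learning nothing
recall_class1 = (y_pred[y_true == 1] == 1).mean()  # 0.0: minority class never found
print(accuracy, recall_class1)
```

The constant predictor scores roughly 99% accuracy while detecting none of the minority samples, which is exactly the trade-off that class weights are meant to counteract.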