
If the per-class accuracy of my model is [0.7, 0.8, 0.2], can I improve performance on the last class by applying the weight [1, 1, 2] in cross-entropy?

When I applied the weight [1, 1, 2], I expected the loss to differ from that of ces1, but it gives the same values. Why is that?
import torch
import torch.nn as nn

ces1 = nn.CrossEntropyLoss(reduction='none')
class_weights = torch.FloatTensor([1, 1, 2]).cuda()
ces2 = nn.CrossEntropyLoss(weight=class_weights, reduction='none')
Output with ces1: [0.3, 0.4, 0.4]
Output with ces2: [0.3, 0.4, 0.4]
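For reference, with reduction='none' PyTorch defines the per-element weighted loss as loss_i = -weight[y_i] * log softmax(x_i)[y_i], so a class-2 sample's loss should be scaled by 2 under weight [1, 1, 2] (the outputs above would only match if no sample in the batch belongs to class 2). Here is a minimal pure-Python sketch of that formula, using made-up example logits, to show the expected scaling:

```python
import math

def weighted_ce(logits, target, weight):
    # per-element weighted cross-entropy:
    # loss = -weight[target] * log(softmax(logits)[target])
    m = max(logits)  # subtract max for numerical stability
    log_sum = math.log(sum(math.exp(x - m) for x in logits))
    log_prob = (logits[target] - m) - log_sum
    return -weight[target] * log_prob

logits = [2.0, 1.0, 0.1]  # hypothetical logits for one sample
# For a sample whose target is class 2, weight [1, 1, 2]
# doubles the loss relative to the unweighted case:
unweighted = weighted_ce(logits, 2, [1.0, 1.0, 1.0])
weighted = weighted_ce(logits, 2, [1.0, 1.0, 2.0])
print(weighted / unweighted)  # 2.0
```

If the two losses in a real batch nevertheless come out identical, it is worth checking which classes actually appear in the batch's targets.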