How to calculate the weights for the CrossEntropy loss function?

Instead of CIFAR-100, I am using just 4 classes (hair color) of the CelebAHQ dataset. The data is unbalanced, and I need to change the loss function by adding class weights. I am using an existing framework:

(Source: pytorch-cifar100/train.py at 2149cb57f517c6e5fa7262f958652227225d125b · weiaicunzai/pytorch-cifar100 · GitHub)

Replacing in train.py the line:

loss_function = nn.CrossEntropyLoss()

with

w = np.array([1 / black / all_samples, 1 / blond / all_samples, 1 / brown / all_samples, 1 / gray / all_samples])
cw = torch.tensor(w, dtype=torch.float32).cuda()  # class weights for classes 0, 1, 2, 3
loss_function = nn.CrossEntropyLoss(weight=cw)

The output is not that good… How can I improve the weights? Should I use the reduction keyword argument? (e.g. loss_function = nn.CrossEntropyLoss(ignore_index=255, weight=cw, reduction='none'))

I am not sure what kind of answer you are looking for. Could you specify a bit more what you want to do?

To address the error message you are getting:
The error tells you that you have 100 classes (CIFAR-100) but only provide weights for 4 classes, and that you need to provide weights for all 100 classes if you want to use the weight parameter of CELoss.
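
For illustration, here is a minimal sketch of that mismatch (random tensors, made-up batch size):

import torch
import torch.nn as nn

num_classes = 100                      # CIFAR-100 has 100 classes
cw = torch.ones(4)                     # only 4 weights

logits = torch.randn(8, num_classes)   # batch of 8 predictions
target = torch.randint(0, num_classes, (8,))

loss_function = nn.CrossEntropyLoss(weight=cw)
loss_function(logits, target)          # raises RuntimeError: weight must be defined for all 100 classes

Passing torch.ones(num_classes) instead makes the call go through, since the weight tensor must have one entry per class.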

Yes, I did not adapt the number of classes.

I saw you completely changed your question by editing it.
I almost missed that; it might have been better to post your new question below.

As it is now, you are not doing anything inherently wrong, as long as the all_samples value is correct.
Take a look at this.
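
As a sketch of one common way to compute such weights, with made-up class counts standing in for your actual black/blond/brown/gray counts:

import torch
import torch.nn as nn

# hypothetical per-class sample counts (black, blond, brown, gray)
counts = torch.tensor([10000.0, 8000.0, 3000.0, 1000.0])
all_samples = counts.sum()

# inverse-frequency weights; the extra division by all_samples in your
# code only rescales every weight by the same constant, so it does not
# change the relative weighting between the classes
w = 1.0 / counts
cw = (w / w.sum()).cuda()  # optional: normalize so the weights sum to 1

loss_function = nn.CrossEntropyLoss(weight=cw)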

What reduction='none' changes compared to reduction='mean' (the default) is that instead of getting the weighted mean of the output, you now get the loss without any reduction applied, meaning the output has the same size as the target.
But keep in mind that if you want to use the weight parameter of CELoss and set reduction='none', you have to normalize the loss manually. See here.
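
A small sketch of both points, with made-up shapes and weights:

import torch
import torch.nn as nn

cw = torch.tensor([0.1, 0.2, 0.3, 0.4])
logits = torch.randn(8, 4)
target = torch.randint(0, 4, (8,))

# reduction='none': one (already weighted) loss value per sample,
# same size as the target
loss_none = nn.CrossEntropyLoss(weight=cw, reduction='none')(logits, target)
print(loss_none.shape)  # torch.Size([8])

# manual normalization: divide by the sum of the weights that were
# actually applied, not by the batch size
loss_manual = loss_none.sum() / cw[target].sum()

# this matches the default weighted mean
loss_mean = nn.CrossEntropyLoss(weight=cw, reduction='mean')(logits, target)
print(torch.allclose(loss_manual, loss_mean))  # True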