Are the class weights of the CE loss optimizable?

PyTorch's cross entropy loss has an optional parameter `weight` that multiplies the loss contribution of each class by a user-defined value.
For a class that is over-represented in the data, we can assign a lower weight to that class, and the model will be more likely to predict the other, up-weighted, classes.
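To make the setup concrete, here is a minimal sketch of static class weights in `nn.CrossEntropyLoss` (the weight values and the toy 3-class data are my own assumptions, not from the docs):

```python
import torch
import torch.nn as nn

# Toy 3-class problem: suppose class 0 is the majority class,
# so we assign it a lower weight than the two minority classes.
weights = torch.tensor([0.2, 1.0, 1.0])
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)           # batch of 8 samples, 3 classes
targets = torch.randint(0, 3, (8,))  # ground-truth labels
loss = criterion(logits, targets)    # scalar: weighted mean over the batch
```

With the default `reduction='mean'`, the per-sample losses are multiplied by the weight of their target class and averaged with a weighted mean.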

That is my current understanding. I am curious, however, whether this parameter is itself optimizable.
After all, it appears in the loss function, so one could potentially compute the gradient of the loss w.r.t. the class weights and update them during training.

Does this make sense? Or will it simply result in predicting the majority class, as in the initial setup without the weights?
Lastly, can this be implemented with a torch optimizer like Adam?

You could apply a manual weighting, as seen in this post, and optimize the weights during training.
However, I would assume the model could try to push the weight values towards zero or large negative values to decrease the weighted loss, so you might need to e.g. normalize them or add a penalty.
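A rough sketch of that idea (the model, data, and names like `raw_weights` are my own assumptions): compute the per-sample loss with `reduction='none'`, multiply by learnable weights manually, and pass a softmax over the raw parameters through the loss, so the effective weights stay positive and sum to a fixed value instead of collapsing to zero:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

num_classes = 3
model = nn.Linear(10, num_classes)

# Raw, unconstrained parameters for the class weights. A softmax keeps the
# effective weights positive and summing to a constant, which prevents the
# optimizer from simply driving them to zero or negative values.
raw_weights = nn.Parameter(torch.zeros(num_classes))

optimizer = torch.optim.Adam(list(model.parameters()) + [raw_weights], lr=1e-2)

x = torch.randn(32, 10)
y = torch.randint(0, num_classes, (32,))

for _ in range(10):
    optimizer.zero_grad()
    logits = model(x)
    # Normalize so the weights sum to num_classes (mean weight of 1).
    class_weights = F.softmax(raw_weights, dim=0) * num_classes
    # Manual weighting: per-sample losses scaled by the weight of each
    # sample's target class, so gradients flow back into raw_weights.
    per_sample = F.cross_entropy(logits, y, reduction="none")
    loss = (per_sample * class_weights[y]).mean()
    loss.backward()
    optimizer.step()
```

Note that even with the normalization the optimizer has an incentive to shift weight towards the classes the model already predicts well, so you would want to monitor the learned weights (or add a penalty) rather than trust them blindly.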