How could I configure/optimize a loss function?

I am working with my CNN model right now and currently fine-tuning it. I chose nn.CrossEntropyLoss() as my loss function since it handles multi-class classification tasks well. I read the loss function's documentation and see that CrossEntropyLoss() has the following parameters: torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean').

I was wondering whether there is any way to optimize these parameters inside the nn.CrossEntropyLoss() function. Further, could I optimize any other loss functions?

The only argument that could potentially be optimized is the weight argument, as the others are optional arguments that change the internal reduction (size_average and reduce are also deprecated in favor of reduction).
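For illustration, here is a minimal sketch of how weight is usually passed as a fixed tensor (the number of classes and the weight values here are made up, not taken from your model):

```python
import torch
import torch.nn as nn

# Hypothetical 3-class problem where class 0 is rare,
# so it is given a larger weight in the loss.
weights = torch.tensor([2.0, 1.0, 1.0])
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(4, 3)            # batch of 4 samples, 3 classes
targets = torch.tensor([0, 1, 2, 1])  # ground-truth class indices
loss = criterion(logits, targets)     # weighted mean cross-entropy
print(loss.item())
```

Making weights trainable instead (e.g. wrapping them in nn.Parameter) is exactly where the issue described below comes in.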
However, I’m unsure how valuable it would be to optimize the weight, as e.g. a zero weight would create a “perfect” but potentially useless loss value of 0, so your model might be able to cheat by optimizing it. In any case, since I’m not familiar with your use case, it could still be a valid approach if you make sure the aforementioned issue cannot happen.
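To make the cheating issue concrete, here is a small sketch showing that all-zero weights drive the loss to 0 no matter how wrong the predictions are (reduction='sum' is used because the default 'mean' divides by the sum of the weights, which would give 0/0 = NaN here):

```python
import torch
import torch.nn as nn

# All-zero class weights: every per-sample loss is multiplied by 0,
# so the summed loss is 0 regardless of the logits.
criterion = nn.CrossEntropyLoss(weight=torch.zeros(3), reduction='sum')

logits = torch.randn(4, 3)            # arbitrary (possibly terrible) predictions
targets = torch.tensor([0, 1, 2, 1])
loss = criterion(logits, targets)
print(loss.item())  # 0.0 - a "perfect" loss without learning anything
```

If you do make weight trainable, you would need some constraint (e.g. a fixed norm or sum) to rule out this degenerate solution.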