I have five classes + 1 background for my semantic segmentation. My mask is like the one below:
It has a lot of background, and the other classes appear only rarely in each image.
I’ve seen code using CrossEntropyLoss with designated weights, but it usually targets standard datasets such as Cityscapes, so the class weights are already published for them.
How do I derive CrossEntropyLoss weights for my dataset?
You can also compute the weights per batch, but that would make training very noisy, since the class priorities the network sees could change from batch to batch. Computing them once over the whole training set is the more stable option.
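A minimal sketch of the whole-dataset approach, assuming your masks are integer label maps with background as class 0 and five foreground classes (the `compute_class_weights` helper and the inverse-frequency scheme are one common choice, not the only one; e.g. median-frequency balancing is another):

```python
import torch
import torch.nn as nn

NUM_CLASSES = 6  # 5 classes + 1 background (label 0)

def compute_class_weights(masks, num_classes=NUM_CLASSES):
    """Count pixels per class over the whole dataset and return
    inverse-frequency weights, rescaled to sum to num_classes."""
    counts = torch.zeros(num_classes, dtype=torch.float64)
    for mask in masks:  # each mask: LongTensor of shape (H, W)
        counts += torch.bincount(mask.flatten(), minlength=num_classes).double()
    # Guard against classes that never appear, which would give infinite weights
    counts.clamp_(min=1.0)
    freqs = counts / counts.sum()
    weights = 1.0 / freqs                      # rare classes get large weights
    weights = weights / weights.sum() * num_classes  # normalize the scale
    return weights.float()

# Synthetic example: two masks dominated by background (class 0)
masks = [torch.zeros(64, 64, dtype=torch.long) for _ in range(2)]
masks[0][:4, :4] = 1   # small patch of class 1
masks[1][:2, :2] = 2   # even smaller patch of class 2

weights = compute_class_weights(masks)
criterion = nn.CrossEntropyLoss(weight=weights)
```

In practice you would iterate over your training masks once (e.g. through your `Dataset`), cache the resulting weight tensor, and reuse it for every epoch. Some people also dampen the weights, for example `1 / log(1.02 + freq)` as in the ENet paper, so that very rare classes don't completely dominate the loss.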