Handling high pixel class imbalance in biomedical image segmentation

Hi,

This post is to understand how high class imbalance is handled in biomedical image segmentation and detection problems, especially in tasks like cell detection where the annotations are at the dot/pixel level. Such annotations lead to a very high number of background (bg) pixels. The model should ideally distinguish cell pixels from non-cell (bg) pixels.

When performing pixel-level classification, the overwhelming number of bg pixels biases learning toward the bg class.

A weighted loss function is one way to counter this. What are the other ways?
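
For reference, here is a minimal sketch of the weighted-loss approach I had in mind (PyTorch, assuming a 2-class cell/background pixel map; the class weights and tensor shapes are illustrative, not tuned):

```python
import torch
import torch.nn as nn

# Illustrative class weights: upweight the rare "cell" class (index 1)
# relative to background (index 0). The value 10.0 is a placeholder,
# not a tuned ratio.
class_weights = torch.tensor([1.0, 10.0])

# nn.CrossEntropyLoss applies the per-class weights at every pixel
# when given (batch, classes, H, W) logits and (batch, H, W) targets.
criterion = nn.CrossEntropyLoss(weight=class_weights)

# Dummy data standing in for model outputs and dot/pixel annotations.
logits = torch.randn(4, 2, 64, 64)            # raw per-pixel scores
targets = torch.randint(0, 2, (4, 64, 64))    # 0 = bg, 1 = cell

loss = criterion(logits, targets)
print(loss.item())
```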