Imbalance in training data

Hello everyone,
I am a new user trying to implement a deep learning U-Net model for forest disturbance detection using remote sensing data. As a newcomer I am just trying to implement the generic model. I have a 4-band TIFF image, for which I manually created a binary mask, and then cut both into 128×128-pixel patches. The problem is that the disturbed areas are severely under-represented in the patches, so my model is unable to identify them.
Could you please help me fix this?

You can limit the number of all-black (background-only) mask patches you keep, e.g. cap them at 40–50% of the dataset relative to patches that contain disturbed area.
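A minimal sketch of that filtering step, assuming your patches are NumPy arrays and the mask uses nonzero values for disturbed pixels (the `bg_keep_prob` value is illustrative; tune it until background patches sit around 40–50% of the kept set):

```python
import numpy as np

def keep_patch(mask_patch, rng, bg_keep_prob=0.4):
    """Decide whether to keep a patch pair.

    Patches containing any disturbed pixels are always kept;
    all-background (all-black mask) patches are kept only with
    probability `bg_keep_prob`, which caps their share of the dataset.
    `bg_keep_prob` is an assumed, illustrative value.
    """
    if mask_patch.any():          # contains at least one disturbed pixel
        return True
    return rng.random() < bg_keep_prob

# Example: filter a list of (image, mask) patch pairs
rng = np.random.default_rng(0)
patches = [(np.zeros((128, 128, 4)), np.zeros((128, 128), dtype=np.uint8))
           for _ in range(10)]
kept = [(img, m) for img, m in patches if keep_patch(m, rng)]
```

Random subsampling like this is the simplest option; you could also sort background patches by some criterion (e.g. cloud cover) instead of dropping them at random.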
Also, for the patches that do contain disturbed (white) pixels, try dividing the image and mask into patches with some overlap; each disturbed region then appears in several patches, which increases the dataset. Data augmentation (flips, rotations, etc.) can add further variation to your disturbed-area patches.
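A sketch of both ideas with plain NumPy, assuming the image is an (H, W, 4) array and the mask an (H, W) array; the 64-pixel stride (50% overlap) is an assumed example value:

```python
import numpy as np

def tile_with_overlap(image, mask, patch=128, stride=64):
    """Cut an aligned image/mask pair into overlapping patches.

    A stride smaller than the patch size (here 64 px, i.e. 50% overlap)
    makes each disturbed region appear in several patches.
    """
    h, w = mask.shape
    pairs = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            pairs.append((image[y:y + patch, x:x + patch],
                          mask[y:y + patch, x:x + patch]))
    return pairs

def augment(img, msk):
    """Simple geometric augmentations: flips and 90-degree rotations.

    Applying the same transform to image and mask keeps them aligned,
    which is essential for segmentation labels.
    """
    out = [(img, msk)]
    out.append((np.flipud(img), np.flipud(msk)))
    out.append((np.fliplr(img), np.fliplr(msk)))
    for k in (1, 2, 3):
        out.append((np.rot90(img, k), np.rot90(msk, k)))
    return out

# Example: a 256x256 scene with 4 bands yields 3x3 = 9 overlapping patches
img = np.zeros((256, 256, 4), dtype=np.float32)
msk = np.zeros((256, 256), dtype=np.uint8)
patches = tile_with_overlap(img, msk)
```

For heavier augmentation (brightness, noise, elastic transforms) a library such as Albumentations is commonly used, but flips and rotations alone already multiply each disturbed-area patch several times over.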