Simple pixel-wise statistics for semantic segmentation

I am trying to benchmark some later work against some common heuristics for identifying clouds. For example, one common statistic is that the red/blue ratio will be > 0.7 for cloud pixels.

I want to optimize the X in Red / Blue > X for the dataset I am working with, and then compute accuracy, IoU, etc. against ground-truth masks. This is what I currently have:

import torch
import torch.nn as nn

class RBR_Model(nn.Module):
    """Custom PyTorch model for gradient optimization of RBR statistics"""

    def __init__(self):
        super().__init__()  # inherit from parent class
        self.RBR_threshold = torch.Tensor([0.7])

    def forward(self, img):
        print(img.shape)  # debug print: expects batched [B, H, W, C] input
        R = img[:, :, :, 0]  # red channel
        B = img[:, :, :, 2]  # blue channel
        RBR = R / B          # red/blue ratio per pixel
        RBR_mask = RBR > self.RBR_threshold  # binary cloud mask
        return RBR_mask

Apologies for my inexperience with PyTorch.

TL;DR: take an input image of shape [H, W, C] and convert it to an [H, W] binary mask based on one value: the ratio of red to blue.
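
For context, this is roughly how I evaluate the mask afterwards. It is just a minimal sketch: the dummy tensors stand in for my real images and ground-truth masks, and I am assuming batched [B, H, W, C] float input since that is what the indexing above implies.

import torch

model = RBR_Model()
img = torch.rand(4, 256, 256, 3)                 # dummy batch of RGB sky images, [B, H, W, C]
gt = torch.randint(0, 2, (4, 256, 256)).bool()   # dummy binary ground-truth cloud masks, [B, H, W]

pred = model(img)                                # [B, H, W] boolean mask

# Pixel-wise accuracy
accuracy = (pred == gt).float().mean()

# IoU for the cloud class
intersection = (pred & gt).sum()
union = (pred | gt).sum()
iou = intersection.float() / union.float().clamp(min=1)

print(f"accuracy={accuracy.item():.3f}, IoU={iou.item():.3f}")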

The self.RBR_threshold won’t be trainable, as the comparison is not differentiable and will detach the tensor (besides that, you would also need to create self.RBR_threshold as an nn.Parameter).
Assuming you are comparing your feature against a single threshold, wouldn’t a ROC curve work to select it?
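Something along these lines could work as a sketch, treating the per-pixel R/B ratio as the score for sklearn.metrics.roc_curve. The `images` and `gt_masks` arrays are placeholders for your data, and picking the point that maximizes Youden's J (TPR - FPR) is just one common way to choose the operating threshold:

import numpy as np
from sklearn.metrics import roc_curve

# images: [N, H, W, C] float array, gt_masks: binary [N, H, W] array (placeholders)
scores = (images[..., 0] / images[..., 2]).ravel()  # R/B ratio per pixel
labels = gt_masks.ravel().astype(int)               # 1 = cloud, 0 = clear

fpr, tpr, thresholds = roc_curve(labels, scores)

# Pick the threshold that maximizes Youden's J statistic (TPR - FPR)
best = np.argmax(tpr - fpr)
best_threshold = thresholds[best]
print(best_threshold)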

Thanks for your reply. My goal is to compare FCNN and Barlow Twins architectures for segmenting clouds, but my thought was that establishing a baseline with a few heuristics from the literature would be interesting and could also contribute to a third, Bayesian model. To compare these heuristics to the fitted models, I felt I should fit the naive heuristics as well.