I am trying to benchmark some later work against common heuristics for identifying clouds. For example, one common statistic is that the red-blue ratio (R/B) will be > 0.7 for clouds.
I want to optimize this threshold X in R/B > X
for the dataset I am working with, and then compute accuracy, IoU, etc. against ground-truth masks. This is what I currently have:
import torch
import torch.nn as nn

class RBR_Model(nn.Module):
    """Custom PyTorch model for gradient optimization of the RBR threshold."""
    def __init__(self):
        super().__init__()
        # wrap the threshold in nn.Parameter so an optimizer can update it
        self.RBR_threshold = nn.Parameter(torch.tensor([0.7]))

    def forward(self, img):
        print(img.shape)  # debug: expects [N, H, W, C], channels last
        R = img[:, :, :, 0]
        B = img[:, :, :, 2]
        RBR = R / B
        RBR_mask = RBR > self.RBR_threshold  # note: a hard comparison blocks gradients
        return RBR_mask
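One issue worth flagging before training: the comparison `RBR > threshold` has zero gradient almost everywhere, so the threshold parameter would never update. A common workaround, sketched below under my own assumptions (the sharpness constant `k` and the `SoftRBRModel` name are hypothetical, not from the post), is to replace the hard comparison with a sigmoid during training and binarize only at evaluation time:

```python
import torch
import torch.nn as nn

class SoftRBRModel(nn.Module):
    """Differentiable surrogate for the RBR threshold (sketch, not canonical)."""
    def __init__(self, init_threshold=0.7, k=10.0):
        super().__init__()
        # learnable threshold; k controls how sharply the sigmoid approximates a step
        self.threshold = nn.Parameter(torch.tensor(init_threshold))
        self.k = k

    def forward(self, img):
        # img: [N, H, W, C], channels last
        eps = 1e-6  # guard against division by zero in the blue channel
        rbr = img[..., 0] / (img[..., 2] + eps)
        # soft mask in (0, 1); gradients flow to self.threshold
        return torch.sigmoid(self.k * (rbr - self.threshold))
```

The soft mask can be trained against ground-truth masks with a BCE loss; at evaluation time, binarize with `rbr > model.threshold` to recover the hard mask.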
Apologies for my inexperience with PyTorch.
TL;DR: take an input image of shape [H, W, C] and convert it to a binary [H, W] mask based on one value: the red-blue ratio threshold.
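For the evaluation side, accuracy and IoU for binary masks can be computed directly with tensor ops; here is a minimal sketch (the helper name `binary_metrics` is my own, not from the post):

```python
import torch

def binary_metrics(pred_mask, gt_mask):
    """Pixel accuracy and IoU for binary cloud masks of shape [H, W]."""
    pred = pred_mask.bool()
    gt = gt_mask.bool()
    accuracy = (pred == gt).float().mean().item()
    intersection = (pred & gt).sum().item()
    union = (pred | gt).sum().item()
    iou = intersection / union if union > 0 else 1.0  # two empty masks agree
    return accuracy, iou
```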