How to handle overlapping segmentation masks in v2 transformations

Segmentation masks are provided as polygons, from which I am generating a binary mask for each class as a separate channel.
For example, I have 13 classes, so my mask contains 13 channels, each channel holding the binary mask for one class.
Now I want to pass this through image transformations. I wrote a custom transform using skimage, but scaling down interpolates the binary mask, turning its 0/1 values into floats below 1.
I then tried v2 transforms, but wrapping the 13-channel array/tensor in a Mask datapoint runs my 12 GB of RAM out of memory.
I don't want to store the masks in RGB format, because some of the masks overlap and the resulting color changes would lead to incorrect labels.
Any insights on how I can approach this problem?
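For context on the memory side, the footprint depends heavily on dtype: `np.full(shape, 0)` defaults to int64, which is 8x larger than uint8 (illustrative numbers below, assuming 1024×1024×13 masks as in my setup):

```python
import numpy as np

shape = (1024, 1024, 13)  # H x W x channels, one binary mask per class

# np.full with a Python int defaults to int64: 8 bytes per element
mask_int64 = np.full(shape, 0)
# uint8 stores the same 0/1 values in 1 byte per element
mask_uint8 = np.zeros(shape, dtype=np.uint8)

print(mask_int64.dtype, mask_int64.nbytes / 2**20, "MiB")  # int64 104.0 MiB
print(mask_uint8.dtype, mask_uint8.nbytes / 2**20, "MiB")  # uint8 13.0 MiB
```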

Here is my skimage-based resizing for the image and mask, which changes the binary mask values due to interpolation.

from skimage import transform


class Rescale(object):
    """Rescale the image in a sample to a given size.

    Args:
        output_size (tuple or int): Desired output size. If tuple, output is
            matched to output_size. If int, smaller of image edges is matched
            to output_size keeping aspect ratio the same.
    """

    def __init__(self, output_size):
        assert isinstance(output_size, (int, tuple))
        self.output_size = output_size

    def __call__(self, sample):
        image, landmarks = sample["img"], sample["mask"]

        h, w = image.shape[:2]
        if isinstance(self.output_size, int):
            if h > w:
                new_h, new_w = self.output_size * h / w, self.output_size
            else:
                new_h, new_w = self.output_size, self.output_size * w / h
        else:
            new_h, new_w = self.output_size

        new_h, new_w = int(new_h), int(new_w)

        # skimage's resize interpolates (bilinear by default), so the binary
        # mask comes back with fractional values instead of 0/1
        img = transform.resize(image, (new_h, new_w))
        landmarks = transform.resize(landmarks, (new_h, new_w))

        return {'img': img, 'mask': landmarks}
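For reference, skimage can keep a mask binary if interpolation is switched to nearest neighbor (`order=0`) and anti-aliasing is disabled; a minimal sketch of that behavior:

```python
import numpy as np
from skimage import transform

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1  # a small binary blob

# Default resize interpolates and anti-aliases, producing fractional values
soft = transform.resize(mask, (4, 4))

# order=0 (nearest neighbor) with anti-aliasing off keeps the values binary
hard = transform.resize(mask, (4, 4), order=0,
                        preserve_range=True, anti_aliasing=False).astype(np.uint8)

print(np.unique(hard))  # only 0s and 1s
```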

This is the code in my dataset class where I wrap the mask in a Mask datapoint.

        mask = create_channel_masks(
            (1024, 1024, 13),
            self.annotation_json[self.files[index]]["polygons"],
            self.annotation_json[self.files[index]]["syms"],
        )
        # sample = {"img": img, "mask": mask}
        img, mask = self.transform(img, tv_tensors.Mask(mask, dtype=torch.uint8))

This is where I generate the multichannel masks.

import numpy as np
from PIL import Image, ImageDraw


def create_channel_masks(image_shape, polygons, syms, disease_color_map):
    # uint8 keeps memory at 1 byte per pixel (np.full(shape, 0) would be int64)
    mask = np.zeros(image_shape, dtype=np.uint8)

    for polygon, sym in zip(polygons, syms):
        ch = disease_color_map[sym][0] - 1
        # OR the new polygon into the class channel so overlaps stay binary
        mask[:, :, ch] = mask[:, :, ch] | get_mask(polygon)

    return mask

def get_mask(points):
    if len(points) == 0:
        return np.zeros((1024, 1024), dtype=np.uint8)

    # Rasterize the polygon into a single-channel binary image
    img = Image.new('L', (1024, 1024), 0)
    poly = [(x, y) for x, y in points]
    ImageDraw.Draw(img).polygon(poly, outline=1, fill=1)
    curr_mask = np.array(img)

    return curr_mask
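As a quick sanity check of this rasterization recipe, drawing with `outline=1, fill=1` onto an 'L'-mode image really does yield a strictly binary array (hypothetical square polygon and a 64×64 canvas for brevity):

```python
import numpy as np
from PIL import Image, ImageDraw

points = [(10, 10), (50, 10), (50, 50), (10, 50)]  # hypothetical square

img = Image.new('L', (64, 64), 0)
ImageDraw.Draw(img).polygon(points, outline=1, fill=1)
mask = np.array(img)

print(np.unique(mask))   # only 0 (background) and 1 (polygon)
print(mask[30, 30], mask[0, 0])  # inside vs outside the square
```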

Attached image: overlapping colored masks, which is what I want to avoid and why I created the multichannel binary masks (the white regions are where masks overlap).