Rescale Image Properly

Hi Everyone,
I am new to PyTorch and I am working on a nuclei segmentation model.
Below are the image and mask I am trying to use to train my model, but my rescale function cuts off my mask and gives me a black image, so my model does not get trained properly.
Could you see if there is something wrong with my Rescale function?

class Rescale(object):
    """Rescale the image in a sample to a given size.

    Args:
        output_size (tuple or int): Desired output size. If tuple, output is
            matched to output_size. If int, smaller of image edges is matched
            to output_size keeping aspect ratio the same.
    """

    def __init__(self, output_size, train=True):
        assert isinstance(output_size, (int, tuple))
        self.output_size = output_size
        self.train = train

    def __call__(self, sample):
        if self.train:
            image, mask, img_id, height, width = (sample['image'], sample['mask'],
                                                  sample['img_id'], sample['height'],
                                                  sample['width'])

            if isinstance(self.output_size, int):
                new_h = new_w = self.output_size
            else:
                new_h, new_w = self.output_size

            new_h, new_w = int(new_h), int(new_w)

            # resize the image;
            # preserve_range=True means the values are not rescaled to [0, 1] during resize
            img = transform.resize(image, (image.shape[0] // 8 + 6, image.shape[1] // 8 + 6),
                                   anti_aliasing=True, preserve_range=True)
            mask_rescaled = transform.resize(mask, (mask.shape[0] // 8 + 6, mask.shape[1] // 8 + 6),
                                             anti_aliasing=True, preserve_range=True, mode='constant')

            return {'image': img, 'mask': mask_rescaled, 'img_id': img_id,
                    'height': height, 'width': width}
        else:
            image, img_id, height, width = (sample['image'], sample['img_id'],
                                            sample['height'], sample['width'])
            if isinstance(self.output_size, int):
                new_h = new_w = self.output_size
            else:
                new_h, new_w = self.output_size

            new_h, new_w = int(new_h), int(new_w)

            # resize the image;
            # preserve_range=True means the values are not rescaled to [0, 1] during resize
            img = transform.resize(image, (new_h, new_w), preserve_range=True, mode='constant')
            return {'image': img, 'height': height, 'width': width, 'img_id': img_id}


This is the image; I couldn't upload both the mask and the image at the same time.

Maybe when you are resizing to ~1/64th of the original image, the masks are not surviving.
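If that is what is happening, one thing worth trying for label masks is nearest-neighbour interpolation with anti-aliasing turned off, roughly like this (a sketch with a made-up toy mask, not your exact code):

import numpy as np
from skimage import transform

# toy label mask: mostly background with two small nuclei
mask = np.zeros((2000, 2000), dtype=np.int64)
mask[100:130, 200:230] = 1
mask[500:540, 600:640] = 2

mask_small = transform.resize(
    mask,
    (256, 256),
    order=0,              # nearest neighbour keeps the integer labels intact
    anti_aliasing=False,  # anti-aliasing would blur the labels into small float values
    preserve_range=True,
).astype(mask.dtype)

print(np.unique(mask_small))  # labels 0, 1 and 2 should still be present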

Can you post the code of your transform.resize?
Judging by the input arguments, it does not seem to be the standard torchvision.transforms.Resize.

Thank you for replying.
transform.resize is not a function I created; it comes from the transform subpackage of skimage:
from skimage import io, transform

from skimage.transform import resize

Can you post a reproducible code snippet here?

Because I tried using only skimage.transform.resize in this example and it worked as expected, so maybe your problem is elsewhere and not related to your use of the function.
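Something along these lines would already be enough to reproduce it (synthetic arrays instead of your real data, sizes made up to match your // 8 + 6 resize):

import numpy as np
from skimage import transform

# synthetic stand-ins for one training sample (grayscale image, binary mask)
image = np.random.rand(2000, 2000)
mask = np.zeros((2000, 2000), dtype=np.uint8)
mask[900:1100, 900:1100] = 255  # one big "nucleus"

new_shape = (image.shape[0] // 8 + 6, image.shape[1] // 8 + 6)  # -> (256, 256)
img_small = transform.resize(image, new_shape, anti_aliasing=True, preserve_range=True)
mask_small = transform.resize(mask, new_shape, anti_aliasing=True, preserve_range=True,
                              mode='constant')

print(img_small.shape, mask_small.shape)
print(mask_small.min(), mask_small.max())  # if the max is close to 0, the mask really was lost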

I also added the process on how to do it with the torchvision.transforms.Resize function.
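For reference, the idea there is roughly the following (not the exact code from the example; file names are made up and it uses the InterpolationMode API available in recent torchvision versions):

from PIL import Image
from torchvision import transforms

# hypothetical file names, just for illustration
image = Image.open('nuclei_image.png').convert('RGB')
mask = Image.open('nuclei_mask.png')

# bilinear for the image, nearest for the mask so label values are not blended
resize_image = transforms.Resize((256, 256), interpolation=transforms.InterpolationMode.BILINEAR)
resize_mask = transforms.Resize((256, 256), interpolation=transforms.InterpolationMode.NEAREST)

image_small = resize_image(image)
mask_small = resize_mask(mask)

image_tensor = transforms.ToTensor()(image_small)  # float tensor with values in [0, 1]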

Thank you for the reply.
It’s okay
I asked my mentor and he said that downscaling the images from 2000 pixels to 256 pixels would not give me good results.
He told me to generate patches instead.
Could you give me some resources on how to create patches?

For sure!

Looking at the transforms available in the torchvision documentation, you have RandomCrop, RandomResizedCrop, FiveCrop and TenCrop for cutting patches out of an image.

Then it is also possible to write your own transformation to suit your needs; the PyTorch tutorial on data loading and custom transforms is a useful reference.
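As a rough sketch (a hypothetical class, assuming the samples are numpy arrays with 'image' and 'mask' keys like in your Rescale class), a custom transform that cuts the same random patch out of the image and its mask could look like this:

import random

class RandomPatch(object):
    """Cut the same random square patch out of the image and its mask."""

    def __init__(self, patch_size):
        self.patch_size = patch_size

    def __call__(self, sample):
        image, mask = sample['image'], sample['mask']
        h, w = image.shape[:2]
        size = self.patch_size

        # choose one random top-left corner and reuse it for both arrays
        top = random.randint(0, h - size)
        left = random.randint(0, w - size)

        sample['image'] = image[top:top + size, left:left + size]
        sample['mask'] = mask[top:top + size, left:left + size]
        return sample

You could then chain it with your other transforms via torchvision.transforms.Compose and pass the result to your Dataset.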