What should we do for image range uint16 before using transforms.Normalize?

For the code,

transform = transforms.Compose([transforms.ToTensor(),
             transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])

It first converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0]. Then it performs the normalization

image = (image - 0.5) / 0.5

The important thing is that my image is of uint16 type, which means its range is bigger than 255. So I think .ToTensor() may not be a good fit here. What should I do to convert it to the range [0, 1]? Is it image / max(image)?

ToTensor might not normalize your image if it detects an image in mode I;16, as seen in this line of code and this.
Are you using the whole range of uint16? If so, you could normalize your image manually using the max value (65535).
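A minimal sketch of that manual normalization, assuming a single-channel uint16 array that uses the full range (the sample values are made up):

```python
import numpy as np
import torch

# hypothetical 16-bit grayscale image spanning the full uint16 range
img = np.array([[0, 32768, 65535]], dtype=np.uint16)

# go through float32 (torch has no native uint16 dtype),
# then divide by the uint16 max to land in [0.0, 1.0]
tensor = torch.from_numpy(img.astype(np.float32)) / 65535.0
tensor = tensor.unsqueeze(0)  # add a channel dim: 1 x H x W

# same normalization as transforms.Normalize(mean=[0.5], std=[0.5])
normalized = (tensor - 0.5) / 0.5  # -> [-1.0, 1.0]
print(normalized.min().item(), normalized.max().item())  # -1.0 1.0
```

If your data doesn't span the full range, dividing by image.max() instead would stretch whatever range is present to [0, 1], but then the scale differs per image.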
