For the code,
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
ToTensor() first converts a PIL Image or numpy.ndarray (H x W x C) with values in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) with values in the range [0.0, 1.0]. Then Normalize applies, per channel,
image = (image - 0.5) / 0.5
The important thing is that my images are uint16, which means the pixel values can be larger than 255. So I think .ToTensor()
may not be the right choice here. How should I convert them to the range [0, 1]? Is image / image.max() the right approach?
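One possible answer, sketched below with plain NumPy (the function name `to_unit_range` is my own, not from torchvision): since ToTensor() only rescales uint8 inputs, you can divide a uint16 image by the fixed dtype maximum (65535, obtainable via np.iinfo) rather than by image.max(). Dividing by the per-image max would give each sample a different scale, so the same physical intensity maps to different values in different images.

```python
import numpy as np

def to_unit_range(image: np.ndarray) -> np.ndarray:
    """Scale an integer image into [0.0, 1.0] using the dtype's full
    range (65535 for uint16), not the per-image maximum."""
    max_val = np.iinfo(image.dtype).max
    return image.astype(np.float32) / max_val

# Example: a tiny uint16 "image"
img = np.array([[0, 32768, 65535]], dtype=np.uint16)
scaled = to_unit_range(img)          # float32 in [0.0, 1.0]
normalized = (scaled - 0.5) / 0.5    # same shift/scale Normalize applies
```

After this, `torch.from_numpy(scaled)` (plus a channel permute if needed) would give a float tensor you can pass to Normalize, bypassing ToTensor()'s uint8 assumption entirely.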