torchvision's transforms.ToTensor converts the image to [0, 1]. Can I avoid this normalization in the code snippet below in some way, rather than multiplying by 255 afterwards?
transform = transforms.Compose([transforms.ToTensor()])
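For reference, this is a sketch of the multiply-afterwards workaround I would like to avoid (the Lambda step is just illustrative, and it still leaves a float tensor rather than uint8):

import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.ToTensor(),                 # float tensor in [0, 1]
    transforms.Lambda(lambda x: x * 255),  # back to [0, 255], but still float
])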
You can do this easily without using ToTensor. All you need is to define your own transform, like this:
import numpy as np
import torch

class ToTensorWithoutScaling(object):
    """H x W x C -> C x H x W, without scaling values to [0, 1]"""
    def __call__(self, picture):
        # np.array conversion is required; ByteTensor can't take a PIL image
        return torch.ByteTensor(np.array(picture)).permute(2, 0, 1)
transform = transforms.Compose([ToTensorWithoutScaling()])
Edit: the intermediate conversion to np.array is needed before calling the ByteTensor constructor.
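As a quick sanity check (the file path here is just a placeholder), the resulting tensor should be uint8 with values in [0, 255]:

from PIL import Image
import torchvision.transforms as transforms

# ToTensorWithoutScaling as defined above
transform = transforms.Compose([ToTensorWithoutScaling()])
img = Image.open("example.jpg")    # placeholder path
tensor = transform(img)
print(tensor.dtype)                # torch.uint8
print(tensor.min(), tensor.max())  # values stay within [0, 255]

If your torchvision version is recent enough (0.8+), transforms.PILToTensor does the same unscaled conversion out of the box.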