The title might be clickbait, but I wanted to be sure about this.
I have an input image that I want to preprocess by normalizing it and converting it to a tensor. I tried the transforms implemented in torchvision and also a simple NumPy version (see below). Both transformations should give the same result, but in the end they don't, and I don't know why. Any ideas?
```python
import numpy as np
import torch
from torch.autograd import Variable

means = [0.485, 0.456, 0.406]
stds = [0.229, 0.224, 0.225]

preprocessed_img = img_inp.copy()[:, :, ::-1]
for i in range(3):
    preprocessed_img[:, :, i] = preprocessed_img[:, :, i] - means[i]
    preprocessed_img[:, :, i] = preprocessed_img[:, :, i] / stds[i]

preprocessed_img = np.ascontiguousarray(np.transpose(preprocessed_img, (2, 0, 1)))
preprocessed_img = torch.from_numpy(preprocessed_img)
inp = Variable(preprocessed_img.unsqueeze(0), requires_grad=True)
```
```python
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
inpu = Variable(transform(img_inp).unsqueeze(0), requires_grad=True)
```
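For what it's worth, here is a minimal self-contained sketch I used to compare the two pipelines element-wise (a random float `img_inp` in `[0, 1]` stands in for the real image, which is an assumption on my side). With the `[:, :, ::-1]` channel flip removed, the two versions line up; with a `uint8` image or the flip in place they would differ in scale and/or channel order.

```python
import numpy as np
import torch

means = np.array([0.485, 0.456, 0.406], dtype=np.float32)
stds = np.array([0.229, 0.224, 0.225], dtype=np.float32)

# Stand-in for the real image: float32, HWC, values in [0, 1]
img_inp = np.random.rand(4, 4, 3).astype(np.float32)

# Manual NumPy pipeline, WITHOUT the [:, :, ::-1] channel flip
manual = (img_inp - means) / stds
manual = torch.from_numpy(np.ascontiguousarray(manual.transpose(2, 0, 1)))

# ToTensor-style pipeline on a float array: permute HWC -> CHW, then normalize
tv = torch.from_numpy(img_inp.transpose(2, 0, 1).copy())
tv = (tv - torch.from_numpy(means).view(3, 1, 1)) / torch.from_numpy(stds).view(3, 1, 1)

print(torch.allclose(manual, tv))  # True once the channel order matches
```

This at least lets me pin down which of the two differences (the `/ 255` scaling that `ToTensor` applies to `uint8` input, or the channel flip) is responsible.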
Thank you in advance.