Torchvision transformations give wrong results


The title might be clickbait, but I wanted to be sure about this.
I have an input image that I want to normalize and convert to a tensor.
I tried what is implemented in torchvision and also a simple NumPy version (see below).
Both transformations should give me the same results, but in the end they don't, and I don't know why. Any ideas?


means = [0.485, 0.456, 0.406]
stds = [0.229, 0.224, 0.225]

# reverse the channel order (BGR -> RGB), then normalize per channel
preprocessed_img = img_inp.copy()[:, :, ::-1]
for i in range(3):
	preprocessed_img[:, :, i] = preprocessed_img[:, :, i] - means[i]
	preprocessed_img[:, :, i] = preprocessed_img[:, :, i] / stds[i]

# move channels first (H x W x C -> C x H x W) and wrap as a tensor
preprocessed_img = np.ascontiguousarray(np.transpose(preprocessed_img, (2, 0, 1)))
preprocessed_img = torch.from_numpy(preprocessed_img)
inp = Variable(preprocessed_img.unsqueeze(0), requires_grad=True)


transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
inp = Variable(transform(img_inp).unsqueeze(0), requires_grad=True)

Thank you in advance.


Torch uses C x H x W, but your NumPy code assumes H x W x C.

But the transpose only changes the order of the axes.
In the end I have C x H x W, but the values inside are not the same.
And ToTensor expects an H x W x C array, so… I don't understand.

Maybe you should also reverse the order of the means and stds, since your NumPy version flips the channel order with `[:, :, ::-1]` and the torchvision version does not.
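A quick sketch (with a made-up BGR image) of why reversing the means/stds is equivalent to flipping the image channels before normalizing:

```python
import numpy as np

means = np.array([0.485, 0.456, 0.406], dtype=np.float32)
stds = np.array([0.229, 0.224, 0.225], dtype=np.float32)

img_bgr = np.random.rand(2, 2, 3).astype(np.float32)  # made-up BGR image

# Option A: flip the image channels, then normalize with RGB stats
a = (img_bgr[:, :, ::-1] - means) / stds

# Option B: normalize with flipped stats, then flip the channels
b = ((img_bgr - means[::-1]) / stds[::-1])[:, :, ::-1]

print(np.allclose(a, b))  # True
```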

I found the solution: just loading the image with PIL and resizing it works.