"TypeError: tensor is not a torch image.", when using: transforms.Normalize

The size of my tensor (once the batch dimension is removed) is:
(3L, 512L, 682L)

I first remove the batch dimension:

B, C, H, W = output_tensor.size()  # batch, channels, height, width
output_tensor = output_tensor.view(C, H, W)  # reshape to (C, H, W), dropping the batch dimension

And then I try to run transforms.Normalize on the tensor:

Normalize = transforms.Compose([
    transforms.Normalize(mean=[-0.40760392156, -0.45795686274, -0.48501960784],
                         std=[1, 1, 1]),  # subtract the BGR channel means
])

output_tensor = Normalize(output_tensor)

But this results in an error:

    output_tensor = Normalize(output_tensor)
  File "/usr/local/lib/python2.7/dist-packages/torchvision/transforms/transforms.py", line 42, in __call__
    img = t(img)
  File "/usr/local/lib/python2.7/dist-packages/torchvision/transforms/transforms.py", line 118, in __call__
    return F.normalize(tensor, self.mean, self.std)
  File "/usr/local/lib/python2.7/dist-packages/torchvision/transforms/functional.py", line 158, in normalize
    raise TypeError('tensor is not a torch image.')
TypeError: tensor is not a torch image.

Edit: I forgot to take the tensor out of its Variable; F.normalize only accepts a plain 3D tensor, so a Variable fails the check:

output_tensor = Normalize(output_tensor.cpu().data)
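In other words, a minimal sketch of the corrected flow (assuming pre-0.4 PyTorch, where output_tensor is still a Variable on the GPU):

plain = output_tensor.cpu().data  # .cpu() moves it off the GPU, .data unwraps the Variable
output_tensor = Normalize(plain)  # a plain 3D FloatTensor now passes the torch-image check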

That function seems to only work on PIL images, e.g. via the DataLoader class when loading torchvision datasets. It doesn't work on tensors, as far as I have tried so far.

It might be faster to write your own normalize function.
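A hand-rolled version is only a few lines. A minimal sketch (the function name and in-place updates are my assumptions, mirroring what transforms.Normalize does per channel):

def normalize(tensor, mean, std):
    # tensor: (C, H, W) float tensor; mean/std: per-channel sequences
    for c in range(tensor.size(0)):
        tensor[c].sub_(mean[c]).div_(std[c])  # in-place subtract/divide per channel
    return tensor

For the BGR case above it would be called as normalize(output_tensor.cpu().data, mean=[-0.40760392156, -0.45795686274, -0.48501960784], std=[1, 1, 1]).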

It was solved when I changed the order of the transforms:

transformed = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

That is, convert to a tensor first and only then normalize, as suggested in the Stack Overflow answer.
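
For completeness, a small end-to-end sketch of that ordering (the file name and Image.open call are just illustrative):

from PIL import Image
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.ToTensor(),  # PIL image -> float tensor in [0, 1], shape (C, H, W)
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

img = Image.open('example.jpg')  # hypothetical input image
tensor = transform(img)          # normalized (C, H, W) tensor, ready for a model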