The size of my tensor is:

(3L, 512L, 682L)

I first remove the batch dimension:

```
B, C, H, W = output_tensor.size()
output_tensor = output_tensor.view(C, H, W)
```
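For reference, dropping a leading batch dimension of size 1 can also be done with `squeeze(0)` or plain indexing, which avoids unpacking all four sizes. A minimal sketch with NumPy (shapes chosen to match the tensor above; the array name is illustrative):

```python
import numpy as np

# Hypothetical 4-D N x C x H x W batch holding a single image.
output = np.zeros((1, 3, 512, 682))

# Two equivalent ways to drop the size-1 batch dimension.
chw = output.reshape(output.shape[1:])  # like tensor.view(C, H, W)
chw2 = output.squeeze(0)                # like tensor.squeeze(0) in PyTorch

assert chw.shape == (3, 512, 682)
assert chw2.shape == (3, 512, 682)
```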

And then I try to run `transforms.Normalize` on the tensor:

```
Normalize = transforms.Compose([
    transforms.Normalize(
        mean=[-0.40760392156, -0.45795686274, -0.48501960784],
        std=[1, 1, 1],
    )
])  # Subtract BGR mean
output_tensor = Normalize(output_tensor)
```

But this results in an error:

```
output_tensor = Normalize(output_tensor)
  File "/usr/local/lib/python2.7/dist-packages/torchvision/transforms/transforms.py", line 42, in __call__
    img = t(img)
  File "/usr/local/lib/python2.7/dist-packages/torchvision/transforms/transforms.py", line 118, in __call__
    return F.normalize(tensor, self.mean, self.std)
  File "/usr/local/lib/python2.7/dist-packages/torchvision/transforms/functional.py", line 158, in normalize
    raise TypeError('tensor is not a torch image.')
TypeError: tensor is not a torch image.
```

Edit: I forgot to take the tensor out of its `Variable`:

`output_tensor = Normalize(output_tensor.cpu().data)`
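As a sanity check on what the transform does here: with `std=[1, 1, 1]`, `Normalize` reduces to a per-channel mean subtraction. A minimal NumPy sketch of that arithmetic (the image array is random placeholder data):

```python
import numpy as np

# With std=[1, 1, 1], Normalize is just (img - mean) per channel.
mean = np.array([-0.40760392156, -0.45795686274, -0.48501960784])

img = np.random.rand(3, 512, 682)          # C x H x W image
normalized = img - mean.reshape(3, 1, 1)   # broadcast over H and W

# Each channel is shifted by its own mean entry.
assert np.allclose(normalized[1], img[1] - mean[1])
```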