Tensor (image) with negative values

Hello, I am using OpenCV capture to get frames and pass them to a PyTorch model.

Currently I'm preprocessing the frame in the following way:

import cv2
import PIL.Image
from torchvision import transforms

data_transforms = transforms.Compose(
	[
		transforms.Resize(INPUT_SIZE),
		transforms.CenterCrop(INPUT_SIZE),
		transforms.ToTensor(),
		transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
	]
)

def preprocess(image):
	image = PIL.Image.fromarray(image)
	image = data_transforms(image)
	image = image.float()
	image = image.unsqueeze(0)  # ResNet50 expects a 4D tensor: (batch, channels, height, width)
	return image

while cap.isOpened():
	ret, frame = cap.read()
	if not ret:  # stop when no frame is returned
		break
	img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
	image_data = preprocess(img).to(DEVICE)

But I'm getting negative values when printing image_data:

tensor([[[[ 0.1426,  0.1426,  0.1426,  ...,  0.0227,  0.0227,  0.0227],
          [ 0.1426,  0.1426,  0.1426,  ...,  0.0227,  0.0227,  0.0227],
          [ 0.1426,  0.1426,  0.1426,  ...,  0.0227,  0.0227,  0.0227],
          ...,
          [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
          [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
          [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179]]

Shouldn't the values of image_data be in the range [0, 1]? Am I missing something?

Hi,

transforms.ToTensor() scales your input to the range [0, 1], but transforms.Normalize(mean, std) then computes the z-score of the input images (standardizes them), so it is expected to see negative values, and even values outside [-1, 1], since you are using mean and std values other than 0.5.
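
As a quick sanity check, using the ImageNet statistics from your transform: a black pixel, which is 0.0 after ToTensor(), maps to (0.0 - 0.485) / 0.229 ≈ -2.1179 in the first channel, which is exactly the value in your printout:

import torch

mean = torch.tensor([0.485, 0.456, 0.406])
std = torch.tensor([0.229, 0.224, 0.225])

pixel = torch.tensor([0.0, 0.0, 0.0])  # a black pixel after ToTensor()
z = (pixel - mean) / std               # what Normalize(mean, std) computes per channel
print(z)                               # tensor([-2.1179, -2.0357, -1.8044])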

Best


Thank you @Nikronic!

So there is no problem? The way I'm doing it, the inference will work as expected?

Actually, it has been shown experimentally that this kind of normalization helps models converge faster and achieve better results.
At inference time, you need to use the same mean and std you used during training. Then, depending on your network, the final outputs may need to be unnormalized with the same mean and std to get back to the original scale.
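
For the unnormalization step, a minimal sketch (the unnormalize helper here is just for illustration, assuming the same ImageNet mean/std as above):

import torch

mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def unnormalize(t):
	# invert Normalize(mean, std): x = z * std + mean, back to the [0, 1] range
	return t * std + mean

img = unnormalize(image_data.squeeze(0).cpu())  # (3, H, W), values in [0, 1] again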