Image normalisation in preprocessing

Apologies if this has been discussed before, but so far I couldn’t find an answer that addresses my “why”. I have the following transformation:

```python
from torchvision import transforms

transformation = transforms.Compose([
    transforms.Resize((244, 244)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
```

Pretty standard. However, I am confused about the negative values. I have been told that I shouldn’t work with negative values. Moreover, this is an example of an image that I get from this transform:
[screenshot of the transformed image]

I know that it doesn’t have to make sense to the human eye, but I can’t imagine that the pixel values here are appropriate for a machine learning pipeline.

Could someone explain why this is actually fine to work with (if it is)?

Could you explain your concern in more detail, and why you think your neural network won’t be able to use normalized inputs (containing both positive and negative values)?

The normalization will subtract the mean and divide by the stddev to create a standardized input, which contains the same “information” but in a different range, which usually helps model training.
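To make that concrete, here is a minimal sketch of what `Normalize` computes for a single channel. The pixel values and the mean/std are illustrative (the ImageNet stats from the post), not taken from your actual image; plain Python is used instead of tensors to keep it self-contained:

```python
# Illustrative mean/std for one channel (the first ImageNet values from the post).
mean, std = 0.485, 0.229

# ToTensor() has already scaled raw pixels into [0, 1]; these values are made up.
pixels = [0.0, 0.4, 0.485, 1.0]

# Normalize computes (x - mean) / std per channel.
# Any value below the channel mean becomes negative -- that's expected.
normalized = [(p - mean) / std for p in pixels]

# The transform is invertible, so no information is lost:
recovered = [n * std + mean for n in normalized]
```

Since the network’s weights can be positive or negative anyway, a zero-centered input range is not a problem; it just shifts and rescales the same data, and you can always undo it (as `recovered` shows) for visualization.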