Normalization of input data

The data is normalized to [0, 1] and given to a CNN. If the same data is instead normalized to [-1, 1] and given to the same CNN, will it produce the same output?

If you pass data1 (normalized to [0, 1]) and data2 (normalized to [-1, 1]) to the same model, you won’t get the same outputs.
However, you could be successful in training models using both approaches.
This would of course still give you slightly different outputs, even if the accuracy of both models is comparable.
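To make this concrete, here is a minimal sketch (the tiny model and shapes are just placeholders, not anything from the post) showing that the same randomly initialized CNN produces different outputs for the two normalization schemes:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny placeholder CNN, just for illustration.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32) * 255.0  # raw pixel values in [0, 255]
x01 = x / 255.0                       # normalized to [0, 1]
x11 = x / 127.5 - 1.0                 # normalized to [-1, 1]

with torch.no_grad():
    out01 = model(x01)
    out11 = model(x11)

print(torch.allclose(out01, out11))   # expected to be False
```

Both inputs carry the same information, but since the model sees numerically different values, the forward passes differ.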


I am a little confused about normalizing data. Suppose I am restoring an image with the formula

out = (cnn_input_image - b) / cnn_output_image

cnn_input_image and b: the actual range is 0 to 255, but they are normalized to [-1, 1].
cnn_output_image: the actual range is 0 to 1, not 0 to 255.

So the numerator is in the range 0 to 255 (normalized to [-1, 1]) and the denominator is in the range [0, 1].

The questions are:

  1. How can I restore the image using cnn_output_image, and which activation function is
    appropriate at the last layer?
  2. Is normalization of cnn_output_image required?
  3. Should I take the absolute value of the denominator if it ranges from -1 to 1?

I’m not sure I understand the question completely.
Are cnn_input_image, b, and cnn_output_image all image tensors, i.e. do they have the same shape?
If so, wouldn’t you divide by zero, if cnn_output_image is normalized to [0, 1]?

cnn_input_image (normalized to [-1, 1]) and cnn_output_image are image tensors, and b is a scalar (a constant value like 0.8). So the operation is:

out [1, 3, 256, 256] = (cnn_input_image [1, 3, 256, 256] - b [1, 3, 1, 1]) / cnn_output_image [1, 3, 256, 256]

The constant value b is subtracted from cnn_input_image, and the result is then divided by the output of the CNN, which is an image tensor.
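Assuming the shapes from the post, the broadcasted operation could be sketched like this (the tensor values here are random stand-ins, not real data):

```python
import torch

torch.manual_seed(0)

cnn_input_image = torch.empty(1, 3, 256, 256).uniform_(-1, 1)  # input normalized to [-1, 1]
b = torch.full((1, 3, 1, 1), 0.8)                              # constant, broadcast over H and W
cnn_output_image = torch.rand(1, 3, 256, 256)                  # CNN output in [0, 1)

# b with shape [1, 3, 1, 1] broadcasts against the [1, 3, 256, 256] tensors.
out = (cnn_input_image - b) / cnn_output_image

print(out.shape)  # torch.Size([1, 3, 256, 256])
```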

So the division might still divide by zero, since cnn_output_image might take arbitrary values.

Back to the questions:

  1. If you want to invert the operation, you could use out * cnn_output_image + b.
  2. How are you normalizing the output? Usually you would normalize the input and let the model train.
  3. I would suggest at least guarding against the potential division by zero.
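One simple way to guard against the division by zero would be to clamp the denominator to a small positive value before dividing (the epsilon value and random tensors below are just illustrative assumptions):

```python
import torch

torch.manual_seed(0)

cnn_input_image = torch.empty(1, 3, 256, 256).uniform_(-1, 1)
b = 0.8
cnn_output_image = torch.rand(1, 3, 256, 256)  # may contain values at or near zero

eps = 1e-6
denom = cnn_output_image.clamp(min=eps)  # prevents division by zero
out = (cnn_input_image - b) / denom

print(torch.isfinite(out).all())  # tensor(True)
```

If the denominator could also be negative (e.g. in [-1, 1]), clamping only the minimum would not be enough; you might instead clamp its magnitude, e.g. `denom.sign() * denom.abs().clamp(min=eps)`.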