Hello all!
I'm currently doing some image deblurring and want to find out what is going wrong.
I compute the mean squared error with this code (np is NumPy):

Y = np.square(np.subtract(sharpimg, img)).mean()

I have three images for reference:

The original sharp image

A blurred version of (1)

A deblurred image produced from (2) by a trained model

By plotting the images, it is evident that (3) is clearer than (2).
However, the mean squared error tells a different story: the MSE between (1) and (2) is 20+, while the MSE between (1) and (3) is 16000+.
Plotting my tensors also showed a big difference in values.
Does anyone know why this happens, and how I might fix it?
Thank you!

The most likely explanation is that the input and output images
of your deblurring network have different normalizations. (This is
certainly possible; your network outputs whatever you train it to
output, so if you train it to output an image whose normalization
differs from that of the input, it will.)

For example, your network might take in an unnormalized grayscale
image whose pixel values range from 0 to 255, normalize it internally,
and then output a normalized, deblurred image whose pixel values
range from -1.0 to 1.0.
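To see how badly a range mismatch like that inflates the MSE, here is a minimal sketch (the arrays are hypothetical stand-ins for your images): the two arrays hold the *same* image, just on different scales, yet the MSE between them is enormous.

```python
import numpy as np

# Hypothetical example: a "sharp" image with pixel values in [0, 255]
# compared against the exact same image rescaled to [-1, 1], as a
# network with internal normalization might output it.
rng = np.random.default_rng(0)
sharp = rng.uniform(0.0, 255.0, size=(64, 64))  # unnormalized, 0..255
deblurred = (sharp / 127.5) - 1.0               # same content, range -1..1

mse = np.square(sharp - deblurred).mean()
print(mse)  # huge -- on the order of the squared pixel values,
            # even though the two images are visually identical
```

So a large MSE here measures the scale mismatch, not the quality of the deblurring.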

(I note that your reported mean-squared-error is close to (255 / 2)**2.)

Print out the min(), max(), and mean() of your three images (original,
blurred, and deblurred) to see if inconsistent normalization explains
your issue.
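A small helper along those lines (the image variable names are placeholders for your own arrays; call .numpy() on a tensor first):

```python
import numpy as np

def summarize(name, img):
    """Print and return the min / max / mean of an image array."""
    img = np.asarray(img, dtype=np.float64)
    stats = {"min": float(img.min()),
             "max": float(img.max()),
             "mean": float(img.mean())}
    print(f"{name}: min={stats['min']:.3f} "
          f"max={stats['max']:.3f} mean={stats['mean']:.3f}")
    return stats

# summarize("original",  sharp_img)
# summarize("blurred",   blurred_img)
# summarize("deblurred", deblurred_img)
```

If one image reports a range like 0..255 and another reports roughly -1..1, you have found the problem.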

If this is the cause, you might consider retraining your network to produce
a deblurred image with the desired normalization (or, alternatively, accept
an input image with the desired normalization).

But, as a work-around, you could try just normalizing your original and
deblurred images consistently before computing the mean-squared-error
between them.
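That work-around might look like this (a sketch; min/max scaling is one simple choice, and dividing by 255 or by the known output range works just as well, assuming the images are not constant):

```python
import numpy as np

def to_unit_range(img):
    """Rescale an image to [0, 1] via min/max normalization."""
    img = np.asarray(img, dtype=np.float64)
    return (img - img.min()) / (img.max() - img.min())

def mse(a, b):
    """MSE between two images after normalizing both to [0, 1]."""
    return float(np.square(to_unit_range(a) - to_unit_range(b)).mean())
```

With both images on the same scale, the MSE again reflects image differences rather than normalization differences.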

Hi Frank!
Thanks for the suggestion! Normalizing does the trick; the MSE looks reasonable now. Unfortunately, the MSE between (3) and (1) is still a bit higher than between (1) and (2), so I guess my model is not performing as well as I hoped. Thanks for the reply though!