Visuals not correlating with metric values in project

My current project involves deblurring images using CNNs.
I plotted the following 3 images:

  1. Original sharp image
  2. Blurred image using OpenCV
  3. Deblurred image of (2), produced by the model I have trained.

Upon plotting these 3, I find that (3) is a bit clearer than (2), which is more or less what I am trying to achieve here.
But calculating the MSE/SSIM/PSNR values shows that (2) is better than (3) in terms of quality. I have normalized the values of all 3 images, and this is what I get:

Degraded Image:
PSNR: 25.266282651000232
MSE: 580.1941565688776
SSIM: 0.8825525369859532

Reconstructed Image:
PSNR: 24.81144082688032
MSE: 644.2546071027518
SSIM: 0.8196236328699172

I was wondering what the possible reasons are for this happening, despite the reconstructed image looking better than the degraded image. Thanks!
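For reference, this is a minimal NumPy sketch of the kind of computation I am doing for MSE and PSNR (SSIM comes from a library routine and is omitted; the `data_range` of 255 is an assumption, not necessarily what my actual code uses):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images, computed in float64."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB, assuming pixels span `data_range`."""
    m = mse(a, b)
    if m == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / m)
```

One thing worth double-checking in this kind of comparison: if the images really were normalized to [0, 1], `data_range` must be 1.0, and all three images must be compared on the same pixel scale, or the numbers aren't comparable.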

Hi Hong!

How did you train your model?

For example, you might train a model to maximize some sort of “sharpness
criterion” of the images it produces.

On the other hand, you might train a model to best match a ground-truth
“original sharp image.”

In the former case you could well get deblurred images that look sharp but
differ somewhat from the “original sharp image.”

Best.

K. Frank

I trained my model with (2) as my inputs and (1) as my ‘labels’, using the SRCNN architecture.
Hence I was expecting it to be sharper :confused:

Hi Hong!

You haven’t told us what loss criterion you have used for training (nor
the relationship of your training data to the “original sharp image” and
“blurred image” of your original post).

If you trained, for example, with MSELoss, then you would expect
your model to predict “deblurred” images that are closer to your
“original sharp image,” as measured by MSELoss, than is the “blurred
image” you input to your model.

But you say that your “deblurred image” has a greater MSELoss than
the “blurred image”.

If your “original sharp image” / “blurred image” pair are not in your
training set but are in a validation set that is of the same character
as your training set, then you could be overfitting so that you get
good results for your training set (lower MSELoss) but worse results
(higher MSELoss) for samples not in your training set.

If your “original sharp image” / “blurred image” pair differs in character
from your training data (as distinct from merely not being in your training
set), then you might not be overfitting, per se, but your model might
not be generalizing to data samples with somewhat different character.

Or, you could just have a bug somewhere.

(If you’re not using a loss criterion that teaches your model to predict
deblurred images that match the unblurred images, as measured by
MSELoss, then there is no real discrepancy here.)

One comment: even if the deblurring problem is hard, it would not be
hard (unless you have a lossy model) for your model to learn to output,
as its prediction, the very blurry image that was input to it. So your
model should never be producing a “deblurred” image with a higher
MSELoss (with respect to the unblurred target) than the blurry input
itself, because simply echoing the input would already achieve the
input’s lower loss.
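As a concrete check (a sketch only; `model`, `blurred`, and `sharp` stand in for your trained SRCNN and one training pair), you can compare the criterion on the model’s output and on the raw input directly:

```python
import torch
from torch import nn

criterion = nn.MSELoss()

def compare_losses(model, blurred, sharp):
    """Return (loss of model output, loss of blurred input), both vs. the sharp target."""
    model.eval()
    with torch.no_grad():
        deblurred = model(blurred)
    return criterion(deblurred, sharp).item(), criterion(blurred, sharp).item()

# A "model" that just echoes its input ties the baseline exactly, so a
# properly trained deblurring model should do no worse than this.
```

If `compare_losses` reports a higher loss for the output than for the input on a training sample, that points at the data pairing, the normalization, or the training loop rather than at the difficulty of the problem.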

If you are using MSELoss for your training, you might take a smallish
subset of your training data and verify that you can overfit it,
which would mean that your model would predict deblurred images
with low MSELosses, at least for samples in the training subset you
used to overfit.

Note that sharpness is, so to speak, a perceptual characteristic. Producing
an image that looks sharp is not the same thing as recovering the
original unblurred image. One could easily imagine a deblurring model
(that was trained using some sort of sharpness criterion) that produces
nice, sharp images that deviate more from the ground-truth original
sharp image (as measured, say, by MSELoss) than does the blurred
image input to the model.
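As a toy illustration of this point (made-up numbers, 1-D “images”): a perfectly crisp edge placed one pixel off scores worse by MSE than the blurred edge does, even though it looks sharper:

```python
import numpy as np

target = np.array([0., 0., 0., 0., 1., 1., 1., 1.])         # sharp edge at index 4
blurred = np.array([0., 0., 0., 1/3, 2/3, 1., 1., 1.])      # box-blurred edge
crisp_shifted = np.array([0., 0., 0., 0., 0., 1., 1., 1.])  # crisp edge, wrong place

def mse(a, b):
    return float(np.mean((a - b) ** 2))

print(mse(blurred, target))        # 1/36, about 0.028
print(mse(crisp_shifted, target))  # 1/8 = 0.125 -- "sharper," yet worse by MSE
```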

Best.

K. Frank

Hi Frank!
Thanks for the reply.
I’ve used MSELoss, and the image that I am testing is actually in the training set.
Hence, in theory, shouldn’t the MSELoss actually be smaller for the deblurred image than for the blurred image?

Thank you!

Hi Hong!

Yes, your reasoning is correct – the deblurred image should give you a
lower MSELoss (because that’s what you’ve trained your model to do).

That’s why I suggest trying to overfit part of your training set. If you take
an adequately small subset of your training data and train your model with
an adequately large number of epochs, your model should be able to
“memorize” the “original sharp images” that you are using for your targets
and then simply reproduce them close to exactly (which will then give an
MSELoss very close to zero).
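Schematically (a sketch only: a stand-in two-layer conv net and a synthetic blurred/sharp pair, not your actual SRCNN or data), such an overfitting check might look like:

```python
import torch
from torch import nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in data: one synthetic "sharp" image and a lightly blurred copy.
sharp = torch.rand(1, 1, 16, 16)
blurred = 0.7 * sharp + 0.3 * F.avg_pool2d(
    sharp, 3, stride=1, padding=1, count_include_pad=False)

# Stand-in model: a tiny conv net (your SRCNN would go here).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

first_loss = None
for step in range(500):
    optimizer.zero_grad()
    loss = criterion(model(blurred), sharp)
    if first_loss is None:
        first_loss = loss.item()
    loss.backward()
    optimizer.step()

# After enough steps the training loss should be far below both its
# starting value and the loss of the blurred input itself.
baseline = criterion(blurred, sharp).item()
print(first_loss, loss.item(), baseline)
```

If the training loss refuses to drop well below `baseline` here (the loss you would get by just echoing the input), suspect a bug rather than a hard problem.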

If you can’t overfit in this way, you either have a bug somewhere or you have
a very weird model (or you aren’t training enough).

First make sure you can overfit and then go on to the next step of training
a useful model (that isn’t overfit) that can produce deblurred images that
do have a lower MSELoss than the blurred-image inputs (and that also
hopefully look sharp to the eye) for data in a validation set that isn’t used
for training.

Best.

K. Frank

To overfit the model, should I simply increase the number of epochs? Or are there any other methods I can use to overfit my model?
Thank you!