You haven’t told us what loss criterion you have used for training (nor
the relationship of your training data to the “original sharp image” and
“blurred image” of your original post).
If you trained, for example, with
MSELoss, then you would expect
your model to predict “deblurred” images that are closer to your
“original sharp image,” as measured by
MSELoss, than is the “blurred
image” you input to your model.
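Concretely, that expectation looks like this (a minimal sketch; the random tensors and noise levels are made-up stand-ins for your images):

```python
import torch

# Made-up stand-ins for the images in question.
sharp = torch.rand(1, 3, 64, 64)                    # "original sharp image"
blurred = sharp + 0.10 * torch.randn_like(sharp)    # degraded input to the model
deblurred = sharp + 0.01 * torch.randn_like(sharp)  # a plausible "good" prediction

mse = torch.nn.MSELoss()
# A model trained with MSELoss should produce predictions that satisfy this:
print(mse(deblurred, sharp).item() < mse(blurred, sharp).item())  # expect True
```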
But you say that your “deblurred image” has a greater MSELoss (with
respect to your “original sharp image”) than the “blurred image”.
If your “original sharp image” / “blurred image” pair are not in your
training set but are in a validation set that is of the same character
as your training set, then you could be overfitting so that you get
good results for your training set (lower MSELoss) but worse results
(higher MSELoss) for samples not in your training set.
If your “original sharp image” / “blurred image” pair differs in character
from your training data (as distinct from merely not being in your training
set), then you might not be overfitting, per se, but your model might
not be generalizing to data samples with somewhat different character.
Or, you could just have a bug somewhere.
(If you’re not using a loss criterion that teaches your model to predict
deblurred images that match the unblurred images, as measured by
MSELoss, then there is no real discrepancy here.)
One comment: Even if the deblurring problem is hard, it would not
be hard (unless you have a lossy model) for your model to learn to
output the blurry image that was input to the model as its prediction,
rather than output a deblurred image that has a higher MSELoss with
respect to the unblurred target image, because outputting the input
image would lower the loss.
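Put differently, “just copy the input” establishes a baseline loss that training should push the model below, because any prediction farther from the target than the input itself would be penalized (again a sketch with made-up stand-in tensors):

```python
import torch

mse = torch.nn.MSELoss()
sharp = torch.rand(1, 3, 32, 32)                 # made-up ground truth
blurred = sharp + 0.1 * torch.randn_like(sharp)  # made-up model input

identity_loss = mse(blurred, sharp)                # "just copy the input" baseline
worse = blurred + 0.1 * torch.randn_like(blurred)  # a prediction even farther off
print(mse(worse, sharp).item() > identity_loss.item())  # expect True
```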
If you are using
MSELoss for your training, you might take a smallish
subset of your training data and verify that you can overfit your data,
which would mean that your model would predict deblurred images with
very low MSELoss, at least for samples in the training subset you
used to overfit.
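That sanity check might look something like the following sketch (the stand-in model, data, learning rate, and step count are all made up; substitute your own):

```python
import torch

# Made-up stand-in for a deblurring model: a single conv layer.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
mse = torch.nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# A "smallish subset": here just one made-up (blurred, sharp) pair.
sharp = torch.rand(1, 3, 16, 16)
blurred = sharp + 0.1 * torch.randn_like(sharp)

initial_loss = mse(model(blurred), sharp).item()
for _ in range(200):  # train until the subset is (over)fit
    opt.zero_grad()
    loss = mse(model(blurred), sharp)
    loss.backward()
    opt.step()
final_loss = loss.item()

# If the model and training loop are bug-free, the loss on this
# subset should fall well below where it started.
print(final_loss < initial_loss)  # expect True
```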
Note that sharpness is, so to speak, a perceptual characteristic. Producing
an image that looks sharp is not the same thing as recovering the
original unblurred image. One could easily imagine a deblurring model
(that was trained using some sort of sharpness criterion) that produces
nice, sharp images that deviate more from the ground-truth original
sharp image (as measured, say, by
MSELoss) than does the blurred
image input to the model.
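As a toy illustration of this last point (random tensors stand in for real images, and a two-pixel shift stands in for a “sharp but wrong” model output):

```python
import torch

sharp = torch.rand(1, 3, 32, 32)  # made-up ground truth
# An actual blur: a 3x3 box filter (average pooling with stride 1).
blurred = torch.nn.functional.avg_pool2d(sharp, 3, stride=1, padding=1)
# A perfectly "sharp" image that is merely shifted two pixels sideways.
shifted = torch.roll(sharp, shifts=2, dims=-1)

mse = torch.nn.MSELoss()
# The sharp-looking but misaligned image is farther (in MSELoss terms)
# from the ground truth than the blurred image is.
print(mse(shifted, sharp).item() > mse(blurred, sharp).item())  # expect True
```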