Evaluate the reliability of a prediction

Hi everyone,

I am currently using image data to predict some values; it is basically a regression problem. Now I want to evaluate the reliability of the predictions.
I have seen some code to evaluate the model as a whole, but is it possible to do this evaluation for every single prediction the model makes?

Thanks a lot.

Do you have labels for every image?

I don’t have labels in the classification sense, but for every image I have a corresponding target array.
Like:
img1, [0.1,0.6,0.8,1.5]
img2, [0.1,0.6,0.68,1.55]
img3, [0.1,0.75,0.8,10.5]

So, finally, the trained neural network gives me an array like:
imgTest, [0.3,0.6,0.5,0.9]

I want to evaluate the reliability of the prediction.

Divide your data into 80% training data and 20% evaluation data (some people use 90%/10%). To evaluate your model, compare the model’s prediction with the label given to you for every image in the evaluation data; if they are equal, add 1 to a correct_counter. After running over all the examples in the evaluation data, just compute the following division: correct_counter / size(evaluation).
This is probably the most basic method to evaluate a model.
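
In code, the procedure could look something like this (a minimal sketch: `images`, `targets`, and `model.predict` are placeholder names, and since your targets are continuous values I assume a small tolerance instead of exact equality, which floats will almost never satisfy):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def evaluate(model, images, targets, tol=0.05):
    # Hold out 20% of the data for evaluation (80/20 split).
    X_train, X_eval, y_train, y_eval = train_test_split(
        images, targets, test_size=0.2, random_state=42)

    # ... train the model on X_train / y_train here ...

    correct_counter = 0
    for x, y in zip(X_eval, y_eval):
        pred = model.predict(x[np.newaxis])[0]
        # With continuous targets an exact match is rarely met, so this
        # sketch counts a prediction as "correct" when every component is
        # within the tolerance `tol` (an assumption, not exact equality).
        if np.allclose(pred, y, atol=tol):
            correct_counter += 1

    return correct_counter / len(X_eval)
```

For regression it is also common to report the average error on the evaluation set instead of a correct/incorrect count.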


Nice, for evaluating the model I totally agree with you.
But is it possible to evaluate a single prediction that the model has made?

What do you mean by evaluating the prediction? Images that you don’t have the labels for?

Yes, just for a new image.

A model evaluation will give you the probability of the model being right or wrong, which is probably the best you can get. Another approach is human evaluation, in which people are asked to assess how well the model did (this is common when using generative models).
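
If it helps, one way to attach a number to a new prediction is to report the error measured during the model evaluation alongside it (a sketch assuming `model`, `X_eval`, `y_eval`, and `new_image` follow from the split above; the band is an aggregate estimate over the evaluation set, not a per-image guarantee):

```python
import numpy as np

eval_preds = model.predict(X_eval)          # predictions on the held-out data
mae = np.mean(np.abs(eval_preds - y_eval))  # mean absolute error per value

new_pred = model.predict(new_image[np.newaxis])[0]
print(f"prediction: {new_pred}, expected error per value: ~{mae:.3f}")
```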


You are right, human evaluation would be a nice method 🙂