Binary classification shows 99% for totally different input

I did binary classification (dog & cat); the output is between 0 and 1. It works perfectly with the two classes.
But when I test a totally different image (human, nature, etc.), the output is close to 1 (class 1) or close to 0 (class 2).

I think it should be around 0.5, because the image doesn't look like either class.

Could you explain the reason? Is there something more I should know about classification?

Hi Sherzod!

Here’s my speculative way of looking at this:

You train your classifier to predict one of two classes. When it gets it
right, it gets rewarded for predicting that class with high certainty, so
there is some bias towards making high-certainty predictions. Because
you don’t train with not-cat, not-dog images, there is no penalty for
predicting a not-cat, not-dog image – say an image of a person with a
vaguely dog-like face – with high certainty as a dog.
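You can see this effect even in a toy model. The sketch below (a hypothetical 1-D stand-in for your images, not your actual network) trains a logistic-regression "classifier" on two well-separated clusters; nothing in training penalizes confident predictions far from both clusters, so a point way outside the training data still gets an extreme score rather than 0.5:

```python
import math

# Hypothetical 1-D "images": cats cluster around -2, dogs around +2.
cats = [-2.5, -2.0, -1.5]
dogs = [1.5, 2.0, 2.5]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train p(dog | x) = sigmoid(w*x + b) with plain SGD on
# binary cross-entropy loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    for x, y in [(x, 0.0) for x in cats] + [(x, 1.0) for x in dogs]:
        p = sigmoid(w * x + b)
        w -= lr * (p - y) * x  # gradient of BCE w.r.t. w
        b -= lr * (p - y)      # gradient of BCE w.r.t. b

# In-distribution inputs get confident, correct predictions ...
print(round(sigmoid(w * -2.0 + b), 3))  # near 0 -> "cat"
print(round(sigmoid(w * 2.0 + b), 3))   # near 1 -> "dog"

# ... but a point far from both clusters is *also* scored with
# extreme confidence, not 0.5 -- training never penalized this.
print(round(sigmoid(w * 10.0 + b), 3))
```

The same thing happens in a deep network: the decision function is pushed toward certainty on the training classes, and out-of-distribution inputs simply land on one side of the boundary or the other.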

If your actual use case is that at inference time your model will be presented
with cat, dog, and neither images, and it's important that you not predict
the neither images as cat or dog, then you need to train your model to get
the neither images right.

So, in this case, you would want to train a three-class classifier (rather
than a binary classifier), where your three classes would be “cat,” “dog,”
and “everything else.” And your training would include not only “cat”
and “dog” samples, but “everything else” (human, nature, etc.) samples,
as well.

If such a model trains well – and there's no reason it shouldn't – then when
presented with an "everything else" image, it should predict "everything
else" with a probability close to one and, correspondingly, probabilities
for "cat" and "dog" close to zero.
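Here is a minimal sketch of that three-class setup in PyTorch (the architecture and 32x32 input size are my assumptions, not your actual model). The key changes are a head with three logits instead of one sigmoid output, `CrossEntropyLoss` (which takes raw logits and integer labels), and a softmax at inference so "everything else" gets a proper probability:

```python
import torch
import torch.nn as nn

# Hypothetical three-class head: cat = 0, dog = 1, everything else = 2.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),  # assumed 32x32 RGB input
    nn.ReLU(),
    nn.Linear(64, 3),            # 3 raw logits, one per class
)
loss_fn = nn.CrossEntropyLoss()  # expects logits + integer labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One dummy training step with random stand-in data; your real
# batches would include labeled "everything else" images (label 2).
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 3, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()

# At inference, softmax over the three logits yields probabilities
# that sum to one, including a probability for "everything else".
probs = torch.softmax(model(images), dim=1)
print(probs.shape)       # torch.Size([8, 3])
print(probs.sum(dim=1))  # each row sums to 1
```

Note that `CrossEntropyLoss` applies `log_softmax` internally, so the model itself should output raw logits, not probabilities.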


K. Frank


Thanks. I will try to use one more class.