Accuracy calculation for imbalanced data

I have a single-label, three-class classification problem.
The first two classes are, say, dog and cat, and the third is neither of those (it can be a bird, a fox, or just a blank picture).
I am after cat/dog detection, and the third class dominates the data.
Now I want to calculate accuracy for my model.
My intuition tells me to count only the first two classes, because that is what matters to me.
Is this approach correct, or does it not matter?

I also count a prediction as correct only when its probability exceeds 0.6, because I want to bias against false positives.

My code:

import torch

def batch_accuracy(pred: torch.Tensor, test: torch.Tensor) -> tuple[float, float]:
    # Softmax turns the raw logits into per-class probabilities.
    pred_prob = torch.nn.functional.softmax(pred, dim=1).to("cpu")
    test_cpu = test.to("cpu")

    correct_count = 0.0
    total_count = 0.0

    batch_size = pred_prob.size(0)
    for i in range(batch_size):
        class_index = test_cpu[i].item()
        # Count only samples whose true label is cat (0) or dog (1).
        if class_index == 0 or class_index == 1:
            total_count += 1.0
            # Correct only if the probability of the true class passes 0.6.
            if pred_prob[i][class_index] > 0.6:
                correct_count += 1.0

    return correct_count, total_count
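For reference, the same per-batch loop can be expressed without the Python `for` by masking on the label tensor and gathering the probability of each true class. This is only a sketch of an equivalent vectorized version (the function name `batch_accuracy_vec` and the example tensors are my own, not from the original code); it assumes the same 0.6 threshold and the same convention that classes 0 and 1 are cat and dog.

```python
import torch

def batch_accuracy_vec(pred: torch.Tensor, test: torch.Tensor) -> tuple[float, float]:
    # Softmax over class logits, then move everything to the CPU.
    pred_prob = torch.nn.functional.softmax(pred, dim=1).cpu()
    test_cpu = test.cpu()

    # Keep only samples whose true label is cat (0) or dog (1).
    mask = (test_cpu == 0) | (test_cpu == 1)
    total = float(mask.sum().item())

    # Probability the model assigned to each sample's true class.
    true_probs = pred_prob[mask].gather(1, test_cpu[mask].unsqueeze(1)).squeeze(1)

    # Correct only when that probability clears the 0.6 threshold.
    correct = float((true_probs > 0.6).sum().item())
    return correct, total

# Example: four samples, three classes; the second sample (label 2)
# is excluded, and the last one (weak logit 1.0 -> prob ~0.58) fails
# the 0.6 threshold.
pred = torch.tensor([[5.0, 0.0, 0.0],
                     [0.0, 0.0, 5.0],
                     [0.0, 5.0, 0.0],
                     [1.0, 0.0, 0.0]])
test = torch.tensor([0, 2, 1, 0])
correct, total = batch_accuracy_vec(pred, test)  # correct=2.0, total=3.0
```

The masking also makes the design choice explicit: samples of the third class never enter either the numerator or the denominator.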