ImageNet Example Accuracy Calculation

I was looking at the topk accuracy calculation code in the ImageNet example and I had a quick question.

def accuracy(output, target, topk=(1,)):
    """Computes the precision@k for the specified values of k"""
    maxk = max(topk)
    batch_size = target.size(0)

    _, pred = output.topk(maxk, 1, True, True)
    pred = pred.t()
    correct = pred.eq(target.view(1, -1).expand_as(pred))

    res = []
    for k in topk:
        correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
        res.append(correct_k.mul_(100.0 / batch_size))
    return res

Doesn’t the “sorted” parameter in the topk function have to be set to False in order to preserve the ordering, so that when we do pred.eq our comparison is valid?

Thanks for taking time to answer this question.

The “sorted” parameter doesn’t affect the ordering of the input samples, which are the rows of pred; it only sorts the columns of pred, which hold the indices of the top-k labels in the order [ top1 top2 top3 … topk ].
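
To make that concrete, here is a minimal sketch with made-up scores (a batch of 2 samples and 4 classes), showing that the sample (row) order is preserved while each row holds the sorted top-k indices:

import torch

output = torch.tensor([[0.1, 0.7, 0.2, 0.0],
                       [0.4, 0.1, 0.3, 0.2]])
_, pred = output.topk(3, 1, True, True)  # maxk=3, dim=1, largest=True, sorted=True
print(pred)
# tensor([[1, 2, 0],   <- rows (samples) keep their order; columns are [top1, top2, top3]
#         [0, 2, 3]])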

I’m a little confused as to the nature of this function’s output. Is it a list of accuracy values for each image tested, or does it calculate some sort of mean of these values and output a single value?

If it’s the former then could one achieve the latter by just returning res.mean() ?

Also can you just use topk=(3) for a top-3 accuracy for example, rather than topk=(3,)?

Thanks for any help on this.

I spent a bit of time trying to understand this function because of a wrong assumption, so here is my line-by-line explanation for future reference:
This function calculates precision@k: the percentage of samples whose true class appears among the k highest-scoring predictions. For example, if you have 10 classes and k=6, and the true class lands inside the top 6 scores for half of the samples in a batch, you get 50% precision@6.
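
As a sanity check, here is the same definition written out directly on toy numbers of my own (not taken from the original example):

import torch

output = torch.randn(4, 10)              # 4 samples, 10 classes (random scores)
target = torch.tensor([3, 1, 7, 0])      # one true class per sample
topk_idx = output.topk(6, dim=1).indices              # [4, 6] indices of the 6 highest scores per sample
hits = (topk_idx == target.unsqueeze(1)).any(dim=1)   # True if the true class is among the top 6
precision_at_6 = hits.float().mean() * 100            # e.g. 2 hits out of 4 samples -> 50%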

# INPUTS: output has shape [batch_size, category_count]
#    and target has shape [batch_size] (there is only one true class for each sample)
# topk is a tuple of the k values to be included in the precision
# topk has to be a tuple, so if you are passing a single number, do not forget the comma
def accuracy(output, target, topk=(1,)):
    """Computes the accuracy over the k top predictions for the specified values of k"""
    # we do not need gradient calculation for this
    with torch.no_grad():
        # we will use the biggest k and calculate the precision for every k in topk
        maxk = max(topk)
        batch_size = target.size(0)
        # topk gives the biggest maxk values along the given dimension of output
        # output was [batch_size, category_count]; dim=1 means we select the biggest category scores for each sample
        # we ask for maxk values, so we select maxk classes
        # so the result will be [batch_size, maxk]
        # topk returns a tuple (values, indices) of results
        # we only need the indices (pred)
        _, pred = output.topk(input=maxk, dim=1, largest=True, sorted=True)
        # then we transpose pred to be in the shape [maxk, batch_size]
        pred = pred.t()
        # we reshape target and then expand it to be like pred:
        # target [batch_size] becomes [1, batch_size]
        # target [1, batch_size] expands to [maxk, batch_size] by repeating the same correct class answer maxk times
        # when you compare pred (indices) with the expanded target, you get a 'correct' matrix of shape [maxk, batch_size] filled with 1s and 0s for correct and wrong class assignments
        correct = pred.eq(target.view(1, -1).expand_as(pred))
   """ correct=([[0, 0, 1,  ..., 0, 0, 0],
        [1, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 1, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 1, 0,  ..., 0, 0, 0]], device='cuda:0', dtype=torch.uint8) """
        res = []
        # then for each k we sum the 1s in the first k rows of the correct matrix
        for k in topk:
            correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
            res.append(correct_k.mul_(100.0 / batch_size))
        return res

Correct me if you see any mistake.


Thanks for your explanation. It's so good and helpful.


Thanks for the code. It was really helpful with all those comments.
I’m not sure which PyTorch version it was written for, but using PyTorch 1.0.0 I was getting an error on

_, pred = output.topk(input=maxk, dim=1, largest=True, sorted=True)

So to make it work, I changed it to

_, pred = torch.topk(output, maxk, dim=1, largest=True, sorted=True)
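
The method form also seems to work if you just drop the input= keyword, since the tensor itself is the input and the first positional argument of Tensor.topk is k:

_, pred = output.topk(maxk, dim=1, largest=True, sorted=True)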

Thank you for the explanation. Any idea how we can calculate this accuracy for a segmentation task? I did the same thing but it returns an error.

correct = pred.eq(target.view(1, -1).expand_as(pred))

RuntimeError: The expanded size of the tensor (10) must match the existing size (768000) at non-singleton dimension 1.  Target sizes: [1, 10].  Tensor sizes: [1, 768000]
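
One way I'd try to adapt it for segmentation, just as a rough sketch reusing the accuracy function above (not something verified here): treat every pixel as its own sample by flattening the spatial dimensions before calling it.

import torch

def segmentation_accuracy(output: torch.Tensor, target: torch.Tensor, topk=(1,)):
    # rough sketch: output is [B, C, H, W] logits, target is [B, H, W] class indices
    B, C, H, W = output.shape
    output_flat = output.permute(0, 2, 3, 1).reshape(-1, C)  # [B*H*W, C]
    target_flat = target.reshape(-1)                         # [B*H*W]
    return accuracy(output_flat, target_flat, topk=topk)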

Why would we call this function “accuracy” if it actually computes precision out of k classes? Is there an actual tested accuracy function somewhere that computes the percentage of labels correct in one of the top k outputs out of the total number of data points being tested?

Detailed explanation:


from typing import List

import torch


def accuracy(output: torch.Tensor, target: torch.Tensor, topk=(1,)) -> List[torch.FloatTensor]:
    """
    Computes the accuracy over the k top predictions for the specified values of k
    In top-5 accuracy you give yourself credit for having the right answer
    if the right answer appears in your top five guesses.

    ref:
    - https://pytorch.org/docs/stable/generated/torch.topk.html
    - https://discuss.pytorch.org/t/imagenet-example-accuracy-calculation/7840
    - https://gist.github.com/weiaicunzai/2a5ae6eac6712c70bde0630f3e76b77b
    - https://discuss.pytorch.org/t/top-k-error-calculation/48815/2
    - https://stackoverflow.com/questions/59474987/how-to-get-top-k-accuracy-in-semantic-segmentation-using-pytorch

    :param output: output is the prediction of the model e.g. scores, logits, raw y_pred before normalization or getting classes
    :param target: target is the truth
    :param topk: tuple of topk's to compute e.g. (1, 2, 5) computes top 1, top 2 and top 5.
    e.g. in top-2 it means you get a +1 if the right label is among your model's top 2 predictions.
    So if your model predicts cat, dog (0, 1) and the true label was bird (3) you get zero,
    but if it were either cat or dog you'd accumulate +1 for that example.
    :return: list of topk accuracy [top1st, top2nd, ...] depending on your topk input
    """
    with torch.no_grad():
        # ---- get the topk most likely labels according to your model
        # get the largest k \in [n_classes] (i.e. the number of most likely probabilities we will use)
        maxk = max(topk)  # max number of labels we will consider in the right choices for our model
        batch_size = target.size(0)

        # get top maxk indices that correspond to the most likely probability scores
        # (note _ means we don't care about the actual top maxk scores, just their corresponding indices/labels)
        _, y_pred = output.topk(k=maxk, dim=1)  # _, [B, n_classes] -> [B, maxk]
        y_pred = y_pred.t()  # [B, maxk] -> [maxk, B] Expects input to be <= 2-D tensor and transposes dimensions 0 and 1.

        # - get the credit for each example if the models predictions is in maxk values (main crux of code)
        # for any example, the model will get credit if it's prediction matches the ground truth
        # for each example we compare if the model's best prediction matches the truth. If yes we get an entry of 1.
        # if the k'th top answer of the model matches the truth we get 1.
        # Note: for any example in the batch we can only ever get 1 match across the maxk rows (so we never overestimate accuracy; it stays <= 1)
        target_reshaped = target.view(1, -1).expand_as(y_pred)  # [B] -> [1, B] -> [maxk, B]
        # compare every topk's model prediction with the ground truth & give credit if any matches the ground truth
        correct = (y_pred == target_reshaped)  # [maxk, B] where for each example we know which topk prediction matched the truth
        # original: correct = pred.eq(target.view(1, -1).expand_as(pred))

        # -- get topk accuracy
        list_topk_accs = []  # idx is topk1, topk2, ... etc
        for k in topk:
            # get tensor of which topk answer was right
            ind_which_topk_matched_truth = correct[:k]  # [maxk, B] -> [k, B]
            # flatten it to help compute if we got it correct for each example in batch
            flattened_indicator_which_topk_matched_truth = ind_which_topk_matched_truth.reshape(-1).float()  # [k, B] -> [kB]
            # get if we got it right for any of our top k prediction for each example in batch
            tot_correct_topk = flattened_indicator_which_topk_matched_truth.sum(dim=0, keepdim=True)  # [kB] -> [1]
            # compute topk accuracy - the accuracy of the model's ability to get it right within its top k guesses/preds
            topk_acc = tot_correct_topk / batch_size  # topk accuracy for entire batch
            list_topk_accs.append(topk_acc)
        return list_topk_accs  # list of topk accuracies for entire batch [topk1, topk2, ... etc]
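
A quick usage sketch (random logits and labels of my own, just to show the shapes and the returned list; note each entry here is a fraction in [0, 1], not a percentage):

import torch

logits = torch.randn(8, 10)              # 8 samples, 10 classes
labels = torch.randint(0, 10, (8,))      # one true class index per sample
top1, top5 = accuracy(logits, labels, topk=(1, 5))
print(top1.item(), top5.item())          # e.g. 0.125 0.625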

ref:
- torch.topk — PyTorch 1.8.0 documentation
- ImageNet Example Accuracy Calculation
- compute top1, top5 error using pytorch · GitHub
- Top k error calculation - #2 by Oli
- python - how to get top k accuracy in semantic segmentation using pytorch - Stack Overflow