Knowledge Graph Embedding model ranking results

I am running a model that computes KGE embeddings of triples for a link prediction task. I am working with some code that I understand line by line, but I don’t understand how it serves the purpose of ranking predictions.

Usually, in link prediction, the model is tested on a triple such as (Obama, spouse, ?), and it has to predict the tail entity at (?). The model calculates a likelihood score for each entity in the dataset, then ranks all entities by that score. Afterwards, the rank of the correct answer (in this case, Michelle Obama) is saved, and that rank should be the output of the code I am struggling with.
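To make my understanding of the procedure concrete, here is a toy sketch of the ranking step as I described it (the scores and the index of the correct entity are made up):

```python
import torch

# Toy likelihood scores for 5 candidate tail entities
scores = torch.tensor([0.1, 0.9, 0.4, 0.7, 0.2])
correct_idx = 3  # suppose entity 3 is the true tail (e.g. Michelle Obama)

# Sort entities from highest to lowest score
order = torch.argsort(scores, descending=True)

# Rank of the correct entity (1-based) = its position in the sorted order
rank = (order == correct_idx).nonzero().item() + 1
print(rank)  # entity 3 has the 2nd-highest score, so rank 2
```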

What I don’t understand is the following: why are they comparing scores in this code, and why are they summing them up?

In the code below, `scores` is a 2D array where each row holds the likelihood score of every entity in the dataset (size: n_testing_triples x n_dataset_entities), and `targets` holds the score of the correct tail for each testing triple (size: n_testing_triples).

This seems to count the number of entries whose score is >= `targets`. Why is that useful for the ranking?

```python
ranks[0:batch_size] += torch.sum(
    (scores >= targets).float(), dim=1
).cpu()

Full code: KGEmb/base.py at master · HazyResearch/KGEmb · GitHub