What's the time complexity of Tensor.topk?

The following code finds the top-k elements of a tensor.

import torch

n, k = 100, 5
a = torch.randn(n)  # random 1-D tensor of length n
b = a.topk(k)       # returns a (values, indices) namedtuple of the k largest elements

I'm wondering what the time complexity of that operation is.
For example, a simple sort followed by slicing with [:k] would be O(n log n), a quickselect-style partition would be O(n + k), and a heap-based approach would be O(n + k log n). Clearly, if the first algorithm is what topk uses, we may lose some performance and would need to choose a better implementation ourselves when finding top-k elements.
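For reference, here is a rough sketch of those three strategies in plain Python/PyTorch, just to illustrate the asymptotics (note heapq.nlargest keeps a heap of size k, so it is closer to O(n log k); none of this is meant to reflect how topk is actually implemented):

import heapq
import torch

n, k = 100, 5
a = torch.randn(n)
xs = a.tolist()

# 1) Full sort, then slice: O(n log n)
top_sorted = sorted(xs, reverse=True)[:k]

# 2) Quickselect-style partition, illustrated with kthvalue: roughly O(n + k)
kth_largest = -(-a).kthvalue(k).values   # k-th largest element of a
top_partition = a[a >= kth_largest]      # k elements (more if there are ties)

# 3) Heap-based selection: heapq maintains a size-k heap, O(n log k)
top_heap = heapq.nlargest(k, xs)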
Thanks for any help!

I'd also like to know the time complexity of the topk operation, especially on the GPU. Thanks!

I found some clues.

Thanks for digging those up! If these are indeed the backends used by the topk function, it points to an O(n + k) implementation.
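One way to sanity-check that empirically is to time topk against a full sort as n grows; a rough sketch (timings are machine-dependent and only hint at the asymptotics, they don't prove anything):

import time
import torch

k = 5
for n in (10**5, 10**6, 10**7):
    a = torch.randn(n)
    t0 = time.perf_counter()
    a.topk(k)
    t1 = time.perf_counter()
    torch.sort(a, descending=True)
    t2 = time.perf_counter()
    print(f"n={n:>8}: topk {t1 - t0:.4f}s, full sort {t2 - t1:.4f}s")

If topk were just sort-and-slice, the two timings should track each other, whereas a selection-style backend should pull ahead as n grows.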

Thanks for the time complexity. Given that the time complexity is O(n + k), do you know how I could estimate the FLOPs of topk? Do I just plug in n and k?
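The only estimate I can think of so far is to literally plug n and k into the bound with guessed constants, treating comparisons as the dominant operations (the constants below are my assumptions, not anything PyTorch documents):

def estimate_topk_ops(n, k, c_scan=1.0, c_select=1.0):
    # Naive back-of-envelope: cost assumed to grow like c_scan*n + c_select*k.
    # topk is dominated by comparisons rather than floating-point math,
    # so this is an operation count, not a true FLOP count.
    return c_scan * n + c_select * k

print(estimate_topk_ops(100, 5))  # 105.0 with the default (guessed) constants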