How to calculate the big O complexity of an operation

Is there any way to calculate the big O complexity of an operation like topk() when executing on CPU and GPU respectively?

One easy way to check the complexity empirically is to compare the runtimes for proportionally increasing input sizes. For instance, on CPU:

python -m timeit --setup="import torch;x = torch.rand(10 ** 5)" "x.topk(10)" 
python -m timeit --setup="import torch;x = torch.rand(10 ** 6)" "x.topk(10)" 

For the CUDA implementation we get:

python -m timeit --setup="import torch;x = torch.rand(10 ** 5).cuda()" "x.topk(10);torch.cuda.synchronize()" 
python -m timeit --setup="import torch;x = torch.rand(10 ** 6).cuda()" "x.topk(10);torch.cuda.synchronize()" 

I ran these tests several times on my machines, and everything looks linear with respect to the size of the vector.
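
To put a number on "looks linear", one option (again just a sketch, reusing the hypothetical time_topk helper from above) is to fit a least-squares line to log(time) versus log(size); a slope near 1 suggests roughly O(n) scaling, near 2 roughly O(n^2), and so on:

import math
import torch

sizes = [10 ** 5, 10 ** 6, 10 ** 7]
times = [time_topk(torch.rand(n)) for n in sizes]  # CPU here; pass CUDA tensors for the GPU path

# Least-squares slope of log(time) vs. log(size) gives the empirical scaling exponent.
logs_n = [math.log(n) for n in sizes]
logs_t = [math.log(t) for t in times]
mean_n, mean_t = sum(logs_n) / len(logs_n), sum(logs_t) / len(logs_t)
slope = (sum((a - mean_n) * (b - mean_t) for a, b in zip(logs_n, logs_t))
         / sum((a - mean_n) ** 2 for a in logs_n))
print(f"estimated exponent: {slope:.2f}")

Keep in mind this only measures the asymptotic trend over the sizes you test; constant factors and fixed overheads (especially kernel launch overhead on the GPU) can dominate for small inputs.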