Torch operation time measurement using benchmark.Timer

I want to measure the execution time of a torch operation using torch.utils.benchmark.Timer:

    import torch
    import torch.utils.benchmark as benchmark

    # a, b, results and sub_label1 are not shown in the original snippet; they are
    # filled in here so the example runs. Shapes follow the MKN=(10x4096x6144)
    # label in the output below.
    a = torch.randn(10, 4096)
    b = torch.randn(4096, 6144)
    sub_label1 = "MKN=(10x4096x6144)"
    results = []

    results.append(
        benchmark.Timer(
            stmt="torch.matmul(a, b)",
            globals={"a": a, "b": b},
            sub_label=sub_label1,
            description="pytorch_gemm",
        ).blocked_autorange(min_run_time=1))

    compare = benchmark.Compare(results)
    compare.print()

result:

[-------------------  ------------------]
                          |  pytorch_gemm
1 threads: ------------------------------
      MKN=(10x4096x6144)  |      58.7    

Times are in microseconds (us).

The results are displayed to only one decimal place. How can I increase the displayed precision, e.g. to show 58.7654?

You can use compare.trim_significant_figures(), but it won't necessarily give you more digits of precision; if it doesn't, that means there is too much noise in the measurements for the extra digits to be meaningful. The formatting determines the number of significant figures based on the confidence intervals from the measurements.

Without trim_significant_figures, as you observe, you will always have a single digit after the decimal.
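
For reference, a minimal sketch of where the call fits, assuming `results` is the list of Measurement objects built as in the snippet above:

    import torch.utils.benchmark as benchmark

    # results is assumed to be the list of Measurement objects collected earlier.
    compare = benchmark.Compare(results)
    compare.trim_significant_figures()  # adjust displayed digits to the measurement's significant figures
    compare.print()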

Thank you very much for your reply. After testing, I found that this function does not improve the displayed precision; rather, it decreases it. For example:
before:
MKN=(128x4096x6144) | 80.6 | 66.3 | 65.5 | 149 | 107
after trim_significant_figures():
MKN=(128x4096x6144) | 81 | 66 | 66 | 149 | 107

So my understanding is that there is no general way to force more displayed digits, because the number of significant figures is determined internally from the measurements themselves. Is that correct?

Yeah, the measured results are not precise enough for the extra digits shown to be meaningful.
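
If you still want the unrounded numbers, you can read them off the Measurement object itself rather than going through Compare. A minimal sketch, mirroring the snippet above with assumed tensor shapes:

    import torch
    import torch.utils.benchmark as benchmark

    a = torch.randn(10, 4096)
    b = torch.randn(4096, 6144)

    m = benchmark.Timer(
        stmt="torch.matmul(a, b)",
        globals={"a": a, "b": b},
    ).blocked_autorange(min_run_time=1)

    # Raw statistics in seconds per run, at full float precision,
    # independent of how Compare chooses to format them.
    print(m.mean, m.median, m.significant_figures)

Here significant_figures is the benchmark's own estimate of how many of those digits are actually trustworthy.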