Usage guidelines for torch.cuda.utilization()

I came across the torch.cuda.utilization() function for measuring GPU utilization. What is the sample period between each measurement? If I want to measure the utilization of two specific operations, do I need to manually add time.sleep() between them so that one measurement doesn't affect the other?

torch.Op1()
torch.cuda.utilization(0)  # GPU utilization of Op1
time.sleep(1)
torch.Op2()
torch.cuda.utilization(0)  # GPU utilization of Op2
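For concreteness, a runnable version of the same idea might look like the following, with two matrix multiplications standing in as hypothetical placeholders for Op1 and Op2:

import time
import torch

x = torch.randn(4096, 4096, device="cuda")

y = x @ x                          # stand-in for "Op1"
torch.cuda.synchronize()           # make sure Op1 has actually finished on the GPU
print(torch.cuda.utilization(0))   # utilization sampled around Op1

time.sleep(1)                      # let the previous sample window pass

z = x @ x @ x                      # stand-in for "Op2"
torch.cuda.synchronize()
print(torch.cuda.utilization(0))   # utilization sampled around Op2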

Does this approach roughly work?

Internally, an nvmlUtilization_t struct is queried, which reports the utilization rate averaged over a sample period. From the NVML docs:

Each sample period may be between 1 second and 1/6 second, depending on the product being queried.
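If it helps, torch.cuda.utilization() is essentially a thin wrapper around that NVML query. A minimal sketch of the equivalent query using pynvml directly (assuming the nvidia-ml-py / pynvml package is installed) looks like this:

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)          # GPU 0
rates = pynvml.nvmlDeviceGetUtilizationRates(handle)   # fills an nvmlUtilization_t
print(rates.gpu)      # % of time a kernel was executing during the last sample period
print(rates.memory)   # % of time device memory was being read or written
pynvml.nvmlShutdown()

Note that the returned value is not per-operation: it is the percentage of time the GPU was busy over the most recent sample window (up to 1 second long, per the docs above), so very short kernels may be averaged together with whatever else ran in that window.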