Something is slowing down the iteration?

Hi, I'm confused by an inconsistency I observed in the time cost of certain operations.

Case 1:

acc_start = time.time()
# compute mini-batch accuracy and record it
acc = torch.mean((predicted_labels == targets).float()).item()
train_acc_his.append(acc)
print("acc calculating time: ", time.time() - acc_start)

This prints an elapsed time of 1.739304780960083 seconds.
However, I also printed the time for an entire mini-batch iteration like this:

for i, (inputs, targets) in enumerate(train_loader):
    iter_start = time.time()
    ...
    print("iter time: ", time.time() - iter_start)

This prints around 2.210217237472534 seconds.

Therefore, if I comment out the accuracy calculation, the iteration time should drop to roughly 0.5 seconds, right? However, I still get around 2.210217237472534 seconds.

Is it wrong to measure time like this?

Thank you!

If you are using the GPU, note that CUDA operations are asynchronous, so you should call torch.cuda.synchronize() before starting and before stopping the timer.
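
For example, here is a minimal sketch of the synchronized timing. It assumes a CUDA device is available and uses randomly generated predicted_labels and targets as stand-ins for the real model outputs and labels:

import time
import torch

# Stand-in GPU tensors for the model predictions and the targets
predicted_labels = torch.randint(0, 10, (1024,), device="cuda")
targets = torch.randint(0, 10, (1024,), device="cuda")

torch.cuda.synchronize()  # wait for previously launched GPU work (e.g. the forward pass)
acc_start = time.time()

acc = torch.mean((predicted_labels == targets).float()).item()

torch.cuda.synchronize()  # make sure the accuracy kernels have finished
print("acc calculating time: ", time.time() - acc_start)

Without the first synchronize(), the .item() call blocks until all earlier GPU kernels finish, so the accuracy step can appear to take most of the iteration time even though the actual accuracy computation is cheap.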


Ah! I see! Thank you so much!