Hi everyone, I’m trying to measure the time needed for a forward and a backward pass separately on different models from the PyTorch Model Zoo. I’m using this code:
I do 5 dry runs, then measure each forward pass and backward pass ten times, and compute the average and standard deviation. Something strange keeps happening: the first time I execute the code everything is fine, but if I relaunch the script right afterwards, one of the models will typically show a standard deviation much higher than the others, on the same order of magnitude as the average itself. If instead I let a fair amount of time pass between two runs, everything is fine.
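Roughly, my measurement loop looks like this (a simplified sketch with a dummy workload standing in for the real forward/backward pass; `time_fn` is just an illustrative helper name, and on GPU I also call `torch.cuda.synchronize()` before each timestamp since CUDA kernels launch asynchronously):

```python
import time
import statistics

def time_fn(fn, n_warmup=5, n_runs=10):
    """Run fn a few times untimed (warm-up), then time n_runs executions.

    Returns (mean, stdev) in seconds. For GPU work, insert
    torch.cuda.synchronize() before each perf_counter() call so the
    timestamps bracket the actual kernel execution.
    """
    for _ in range(n_warmup):
        fn()
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings), statistics.stdev(timings)

# Dummy CPU workload in place of model(x) / loss.backward()
mean, std = time_fn(lambda: sum(i * i for i in range(100_000)))
print(f"mean={mean:.6f}s std={std:.6f}s")
```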
Any idea what might be causing this?
(Also, if you have any advice on how best to measure this, it would be welcome!)