There is the context manager `profile` (PyTorch Profiler — PyTorch Tutorials 1.11.0+cu102 documentation), which allows us to monitor CPU and GPU times. I'd like to integrate it into my script so that I can turn it on/off flexibly:
```python
from torch.profiler import profile

use_prof = True

with profile(enabled=use_prof, ...):
    # ...
```
The only problem is that there is no `enabled` option, as opposed to `torch.autograd.profiler.profile`, where such an option exists (Automatic differentiation package - torch.autograd — PyTorch 1.11.0 documentation).
Is it possible, or is there a workaround, to flag `torch.profiler.profile` somehow, just like `torch.autograd.profiler.profile`?
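One possible workaround (a sketch, not an official API): since `torch.profiler.profile` lacks an `enabled` flag, you can swap in a no-op context manager via `contextlib.nullcontext` when profiling is disabled. The helper name `make_profiler` below is my own invention for illustration:

```python
import contextlib

def make_profiler(enabled: bool):
    """Return a real profiler context manager when enabled,
    otherwise a no-op stand-in (contextlib.nullcontext)."""
    if enabled:
        # Lazy import so the helper also works where torch is absent.
        from torch.profiler import profile
        return profile()
    # nullcontext() does nothing on enter/exit and yields None.
    return contextlib.nullcontext()

use_prof = False  # flip this to toggle profiling

with make_profiler(use_prof) as prof:
    # the workload to (optionally) profile
    total = sum(i * i for i in range(1000))

if use_prof:
    # prof is a torch.profiler.profile instance only when enabled
    print(prof.key_averages().table())
```

When `use_prof` is `False`, `prof` is `None` (that's what `nullcontext()` yields), so guard any calls on it accordingly.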