Currently I use the following. Is there a better way to enable the profiler without manually calling __enter__? And is it still necessary? (I came up with this when it seemed necessary, but it may have since been refactored.)
if args.profile_autograd:
    autograd_profiler = torch.autograd.profiler.profile()
    autograd_profiler.__enter__()
# model running
if args.profile_autograd:
    autograd_profiler.__exit__(None, None, None)
    autograd_profiler.export_chrome_trace(args.profile_autograd)
I don’t want to use a with block because I want to keep enabling the profiler behind the flag, and I prefer not to factor the model code out into a separate function. An enable()-style API exists for autograd itself, so I thought it might exist for the profiler as well. It also exists for nvprof: torch.cuda.profiler.start()
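For what it's worth, the manual __enter__/__exit__ pairing above can be avoided with contextlib.ExitStack, which only enters the profiler when the flag is set and guarantees the matching exit. This is a sketch of the pattern only: FakeProfiler and profile_flag below are stand-ins for torch.autograd.profiler.profile() and args.profile_autograd from the snippet above.

```python
import contextlib

class FakeProfiler:
    """Stand-in for torch.autograd.profiler.profile(); records enter/exit."""
    def __init__(self):
        self.events = []
    def __enter__(self):
        self.events.append("enter")
        return self
    def __exit__(self, exc_type, exc, tb):
        self.events.append("exit")
        return False

profile_flag = "trace.json"  # stand-in for args.profile_autograd

prof = None
with contextlib.ExitStack() as stack:
    if profile_flag:
        # enter_context() calls __enter__ now and schedules __exit__
        # for the end of the with block, so no manual dunder calls
        prof = stack.enter_context(FakeProfiler())
    # model running stays inline here, unfactored

if prof is not None:
    # with the real profiler this would be:
    # prof.export_chrome_trace(profile_flag)
    print(prof.events)
```

The model code stays inline and the flag check appears only where the profiler is created, at the cost of one extra (always-entered, but cheap) ExitStack.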
Given the migration to the Kineto-based profiler, is there a comparable solution to this problem (some way to create a no-op profiler)? It seems enabled is no longer part of the profile context manager API:
In [9]: torch.__version__
Out[9]: '1.11.0'
In [10]: with torch.profiler.profile(
    ...:     activities=[
    ...:         torch.profiler.ProfilerActivity.CPU,
    ...:         torch.profiler.ProfilerActivity.CUDA,
    ...:     ], enabled=False
    ...: ) as prof:
    ...:     print('hi')
    ...:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-10-6132ab91e726> in <module>
3 torch.profiler.ProfilerActivity.CPU,
4 torch.profiler.ProfilerActivity.CUDA,
----> 5 ], enabled=False
6 ) as prof:
7 print('hi')
TypeError: __init__() got an unexpected keyword argument 'enabled'
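One workaround I can see, absent an enabled= kwarg: contextlib.nullcontext from the standard library acts as a no-op context manager, so the profiler can be swapped out for it under the flag. A sketch, with make_profiler standing in for the real torch.profiler.profile(activities=[...]) call (which is assumed, not constructed here):

```python
import contextlib

use_profiler = False  # e.g. driven by a CLI flag like args.profile

def make_profiler():
    # Stand-in; with torch installed this would return
    # torch.profiler.profile(activities=[...]).
    raise RuntimeError("should not be constructed when disabled")

# Pick a real profiler or a do-nothing context manager up front
ctx = make_profiler() if use_profiler else contextlib.nullcontext()

with ctx as prof:
    result = "hi"  # model running

# nullcontext() yields None, so downstream code can check
# `prof is not None` before calling export_chrome_trace(...)
print(prof)
```

This keeps the model code under a single with block while the enable/disable decision stays in one place.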
My (perhaps incorrect) understanding is that the autograd profiler is deprecated. Is the autograd-based profiler still fully supported and recommended for use?