Proper way to enable and disable autograd profiler

Currently I use the following. Is there a better way to enable the profiler without manually calling __enter__? Is the manual call still necessary? I came up with it when it seemed necessary, but maybe the profiler has been refactored since then.

if args.profile_autograd:
    # create the profiler and enter it manually instead of using a with block
    autograd_profiler = torch.autograd.profiler.profile()
    autograd_profiler.__enter__()

# model running

if args.profile_autograd:
    # exit manually and dump the collected trace
    autograd_profiler.__exit__(None, None, None)
    autograd_profiler.export_chrome_trace(args.profile_autograd)

I don’t want to use a with block because I want to keep enabling the profiler behind the flag, and I prefer not to factor the model code out into a separate function. An enable()-style API exists for autograd itself, so I thought it might exist for the profiler as well. It also exists for nvprof: torch.cuda.profiler.start().
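
For comparison, this is the kind of start/stop-style API I have in mind (just a sketch; args.enable_grad is a placeholder flag):

import torch

# explicit on/off switch for autograd
torch.autograd.set_grad_enabled(args.enable_grad)

# explicit start/stop around the region to capture with nvprof
torch.cuda.profiler.start()
# model running
torch.cuda.profiler.stop()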

Doesn’t profile take an enabled optional argument?

with torch.autograd.profiler.profile(enabled=args.profile_autograd):

It would then also start without the with construct, right?

autograd_profiler = torch.autograd.profiler.profile(enabled=args.profile_autograd)
# model code
autograd_profiler.export_chrome_trace(args.profile_autograd)

Do you know if I can stop it explicitly? I could not find a method for that in the docs: https://pytorch.org/docs/stable/autograd.html#torch.autograd.profiler.profile

No, you still want the context manager.
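
With the enabled flag the model code can stay inline and everything still sits behind the flag, e.g. (a sketch; run_model is a placeholder for your model code):

import torch

# The context manager is always entered; it is a no-op when enabled=False.
with torch.autograd.profiler.profile(enabled=args.profile_autograd) as prof:
    run_model()  # model running

# export only when profiling was actually enabled
if args.profile_autograd:
    prof.export_chrome_trace(args.profile_autograd)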

Given the migration to the Kineto-based profiler, is there a comparable solution to this problem (some way to create a no-op profiler)? It seems that enabled is no longer part of the profile context manager API:

In [9]: torch.__version__
Out[9]: '1.11.0'

In [10]:     with torch.profiler.profile(
    ...:         activities=[
    ...:             torch.profiler.ProfilerActivity.CPU,
    ...:             torch.profiler.ProfilerActivity.CUDA,
    ...:         ], enabled=False
    ...:     ) as prof:
    ...:         print('hi')
    ...: 
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-6132ab91e726> in <module>
      3         torch.profiler.ProfilerActivity.CPU,
      4         torch.profiler.ProfilerActivity.CUDA,
----> 5     ], enabled=False
      6 ) as prof:
      7     print('hi')

TypeError: __init__() got an unexpected keyword argument 'enabled'

Thanks!

cc @tom

You are using the Kineto profiler via torch.profiler.profile, which does not have the enabled argument.
(Note the missing .autograd namespace.)
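
If you want something like the old enabled=False behaviour, one workaround (just a sketch, not an official switch; args.profile and run_model are placeholders) is to substitute a do-nothing context manager, e.g. contextlib.nullcontext, when profiling is off:

import contextlib
import torch

def maybe_profile(enabled):
    # Return a real Kineto profiler when enabled, otherwise a no-op context manager.
    if enabled:
        return torch.profiler.profile(
            activities=[
                torch.profiler.ProfilerActivity.CPU,
                torch.profiler.ProfilerActivity.CUDA,
            ]
        )
    return contextlib.nullcontext()

with maybe_profile(args.profile) as prof:
    run_model()  # model running

if args.profile:
    # prof is a real profiler only when the flag is set
    prof.export_chrome_trace(args.profile)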

My (perhaps incorrect) understanding is that the autograd profiler is deprecated. Is the autograd-based profiler still fully supported (and recommended for use)?

I’m not sure about its deprecation status, but I would probably use the newer profiler.
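
For reference, a minimal sketch of how the newer torch.profiler API is typically used (train_step and the ./log directory are placeholders):

import torch

with torch.profiler.profile(
    activities=[
        torch.profiler.ProfilerActivity.CPU,
        torch.profiler.ProfilerActivity.CUDA,
    ],
    schedule=torch.profiler.schedule(wait=1, warmup=1, active=3),
    on_trace_ready=torch.profiler.tensorboard_trace_handler("./log"),
) as prof:
    for step in range(6):
        train_step()  # one training iteration
        prof.step()   # tell the profiler a step has finished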

Good to know, thanks @ptrblck