torch.cuda.amp - How to set the optimization level?

In NVIDIA's Apex AMP, you set the optimization level when initializing:

from apex import amp
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

In the examples I've read on PyTorch's website, I don't see anything analogous to this. How is this accomplished?

Native AMP (torch.cuda.amp) behaves like the recommended O1 level and doesn't expose any other opt_levels.
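
For reference, here's a minimal sketch of the native torch.cuda.amp training loop that plays the role of O1. The model, data, and loss below are toy placeholders just to make it runnable:

```python
import torch

device = "cuda"
model = torch.nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()
scaler = torch.cuda.amp.GradScaler()  # handles dynamic loss scaling

for _ in range(10):  # toy loop with random data
    inputs = torch.randn(8, 16, device=device)
    targets = torch.randn(8, 4, device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # forward pass runs in mixed precision
        loss = criterion(model(inputs), targets)
    scaler.scale(loss).backward()    # backward on the scaled loss
    scaler.step(optimizer)           # unscales grads; skips step on inf/NaN
    scaler.update()                  # adjusts the loss scale for the next step
```

GradScaler takes over the dynamic loss scaling that Apex's O1 did internally, so there's no initialize() call to configure.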

Thanks, it would seem O1 is the way to go most of the time, so I don't think I'm losing anything.


Let us know if you encounter any issues. 🙂

There is just one issue I've run into, which I posted here: