Using Apex AMP with PyTorch optimizers causes Attribute Error

We recommend using the native mixed-precision utility via torch.cuda.amp as described here. New features, such as compatibility with 3rd-party repositories (transformers in this case), won't land in apex/amp but in native amp instead.
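For reference, here is a minimal sketch of a native-amp training step with a standard PyTorch optimizer; the model, optimizer, and data below are placeholders for illustration only:

```python
import torch
from torch import nn
from torch.cuda.amp import autocast, GradScaler

# Hypothetical model, optimizer, and data for illustration
model = nn.Linear(128, 10).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
scaler = GradScaler()

inputs = torch.randn(32, 128, device="cuda")
targets = torch.randint(0, 10, (32,), device="cuda")

for step in range(10):
    optimizer.zero_grad()
    # Run the forward pass and loss computation in mixed precision
    with autocast():
        outputs = model(inputs)
        loss = criterion(outputs, targets)
    # Scale the loss for backprop, then unscale and step the optimizer
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```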
