Torch.cuda.amp vs Nvidia apex?

tl;dr torch.cuda.amp is the way to go moving forward.

We published Apex Amp last year as an experimental mixed precision tool because PyTorch didn't yet support the extension points needed to move it upstream cleanly. However, asking people to install something separate was a headache; extension building and forward/backward compatibility were particular pain points.

Given the benefits of automatic mixed precision, it belongs in PyTorch core, so moving it upstream has been my main project for the past six months. I'm happy with torch.cuda.amp: it's more flexible and intuitive than Apex Amp, and it fixes many of Apex Amp's known flaws. Apex Amp will be deprecated shortly (to be honest, I haven't been working on it for a while; I've focused on making sure torch.cuda.amp covered the most-requested feature gaps).
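
For context, the Apex Amp workflow being replaced looks roughly like this (a minimal sketch; the toy model, optimizer, and "O1" opt_level are just illustrative):

```python
import torch
from apex import amp  # Apex Amp, the soon-to-be-deprecated path

model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Apex Amp patches the model and optimizer up front with a chosen opt_level.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

optimizer.zero_grad()
loss = model(torch.randn(32, 128, device="cuda")).sum()

# Loss scaling is routed through a context manager tied to the optimizer.
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```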

Prefer torch.cuda.amp, early and often. It supports a wide range of use cases; if it doesn't support your network for some reason, file a PyTorch issue and tag @mcarilli. In general, prefer native tools for versioning stability (that means torch.nn.parallel.DistributedDataParallel too), because they're tested and updated as needed for every master commit and binary build.
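
For anyone who hasn't tried it yet, a typical torch.cuda.amp training loop looks roughly like this (toy model, optimizer, and random data, just to show the autocast + GradScaler pattern):

```python
import torch
from torch.cuda.amp import autocast, GradScaler

device = "cuda"
model = torch.nn.Linear(128, 10).to(device)      # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# GradScaler handles loss scaling so fp16 gradients don't underflow.
scaler = GradScaler()

for step in range(10):                           # toy training loop
    data = torch.randn(32, 128, device=device)
    target = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()

    # autocast runs the forward pass in mixed precision where it's safe.
    with autocast():
        output = model(data)
        loss = loss_fn(output, target)

    # Scale the loss, backprop, then step and update through the scaler.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

No opt_levels, no patching of the model or optimizer: the context manager and the scaler are ordinary objects you can apply to only the regions you want.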

Apex will remain a source of utilities that can be helpful, e.g. fast fused optimizers, but forward and backward compatibility across all PyTorch versions can't be guaranteed. Don't take a dependency on Apex unless you want to try those utilities.
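
If you do want one of those utilities, usage is typically a drop-in swap; here's a sketch with FusedAdam as the example (you need a source build of Apex, and constructor options may vary across Apex versions):

```python
import torch
from apex.optimizers import FusedAdam  # requires building Apex from source

model = torch.nn.Linear(128, 10).cuda()

# FusedAdam is intended as a drop-in replacement for torch.optim.Adam.
optimizer = FusedAdam(model.parameters(), lr=1e-3)
```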
