TF32 flags when using AMP

Hi! I’m trying out TF32 and mixed precision training. Can they be used at the same time? Namely, is setting the flags like this ok when I enable mixed precision training, or should I actually set them back to False?

import torch

torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

Thanks!

You can keep TF32 enabled; it would be used for convs and matmuls outside of the amp.autocast region.
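For example, a minimal sketch of how the two interact (the tensor shapes and the torch.cuda.amp.autocast spelling are just illustrative; this assumes a CUDA device):

import torch

torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")

# Outside autocast: a float32 matmul, which can run in TF32 on Ampere+ GPUs
c = a @ b  # c.dtype == torch.float32

with torch.cuda.amp.autocast():
    # Inside autocast: matmul is cast to float16, so the TF32 flags don't apply here
    d = a @ b  # d.dtype == torch.float16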

Just to check my understanding, the flags would only affect things outside the amp.autocast context?

Yes. Inside the autocast context (assuming it's enabled), float16 or bfloat16 ops will be used for TensorCore-eligible operations.
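You can verify this by checking the output dtypes, e.g. with this small sketch (assuming a CUDA device and a PyTorch version where autocast accepts a dtype argument):

import torch

a = torch.randn(64, 64, device="cuda")
b = torch.randn(64, 64, device="cuda")

with torch.cuda.amp.autocast():  # defaults to float16
    print((a @ b).dtype)  # torch.float16

with torch.cuda.amp.autocast(dtype=torch.bfloat16):
    print((a @ b).dtype)  # torch.bfloat16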

Would TF32 affect training if I wrap all of the training in autocast but with enabled=False?

If autocast is not enabled, then TF32 operations can be used according to their flag settings.
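Here is a sketch of that case (assuming the TF32 flags from above are still set to True and a CUDA device is available):

import torch

torch.backends.cuda.matmul.allow_tf32 = True

a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")

with torch.cuda.amp.autocast(enabled=False):
    # autocast is disabled, so this stays a plain float32 matmul;
    # with allow_tf32=True it can still use TF32 on supported GPUs
    c = a @ b  # c.dtype == torch.float32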