CDhere
April 9, 2024, 9:02pm
Hi! I’m trying out TF32 and mixed precision training. Can they be used at the same time? In other words, is it okay to keep the flags set like this when I enable mixed precision training, or should I set them back to False?
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
Thanks!
You can keep TF32 enabled, and it would be used for convs and matmuls outside of the amp.autocast region.
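A minimal sketch of what this means in practice, assuming a CUDA device is available (the model and shapes are just placeholders): the TF32 flags govern float32 matmuls and convs, while autocast decides which ops are cast down to float16/bfloat16 inside its region.

```python
import torch

# These flags can stay enabled alongside AMP: they only affect how
# float32 matmuls/convs are executed internally (TF32 math on Ampere+).
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

def forward(model, x, use_amp=True):
    # Inside the autocast region, TensorCore-eligible ops run in float16;
    # float32 matmuls outside this region may use TF32 per the flags above.
    with torch.autocast(device_type="cuda", enabled=use_amp):
        return model(x)

if torch.cuda.is_available():
    model = torch.nn.Linear(64, 64).cuda()
    x = torch.randn(8, 64, device="cuda")
    out = forward(model, x)
    print(out.dtype)  # linear runs in float16 inside the autocast region
```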
CDhere
April 9, 2024, 9:07pm
Just to check my understanding, the flags would only affect things outside the amp.autocast context?
Yes, since inside the autocast context (assuming it’s enabled), float16 or bfloat16 ops will be used for TensorCore-eligible operations.
Makor
December 26, 2024, 10:10am
Would TF32 affect training if I wrap all of the training in autocast but with enabled=False?
ptrblck
December 26, 2024, 5:45pm
If you are not enabling autocast, then TF32 operations can be used according to their setting.
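A short sketch of that case, assuming a CUDA device (the tensor shapes are arbitrary): with autocast disabled, no float16 casting happens, but a float32 matmul is still free to use TF32 internally if the flag allows it. Note that TF32 changes the internal math, not the tensor dtype, which stays float32.

```python
import torch

# Allow TF32 for float32 matmuls (effective on Ampere or newer GPUs).
torch.backends.cuda.matmul.allow_tf32 = True

if torch.cuda.is_available():
    a = torch.randn(128, 128, device="cuda")
    b = torch.randn(128, 128, device="cuda")
    # autocast is disabled, so nothing is cast to float16 here, but the
    # float32 matmul may still execute with TF32 math per the flag above.
    with torch.autocast(device_type="cuda", enabled=False):
        c = a @ b
    print(c.dtype)  # torch.float32 -- TF32 affects precision, not dtype
```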