Torch acos cast to float32

Hey all.

I have a very weird issue that I can’t reproduce outside of my training loop, so I apologize in advance for not providing a minimal reproducible example.

I have a float16 tensor on CUDA. I pass it to torch.acos in my training loop and the return is float32 …

I am debugging in the middle of my training step.

x = torch.zeros(10).half().to('cuda')
torch.acos(x).dtype
>>> torch.float32

This makes no sense to me. What could go wrong in my backend for this to happen? I can’t reproduce it outside of this training loop.

My torch version: 1.12.1+cu102

I am using PyTorch Lightning; not sure if this can introduce some issue somewhere …

Okay, I found the issue :person_facepalming:, it is autocast, which PyTorch Lightning probably enables.

import torch
with torch.autocast('cuda'):
    x = torch.zeros(10).half().to('cuda')
    print(torch.acos(x).dtype)
>>> torch.float32

Is it okay then to cast it back to half precision if I need to? And is there a way to ask for keeping half precision here?

Yes, you can cast it back to float16 if your use case doesn’t need the numerical precision that would be lost by this operation. Alternatively, you could also disable autocast for this operation so that no casts are done.
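A minimal sketch of both options, assuming a CUDA device is available (autocast’s CUDA policy upcasts acos to float32, which is what the snippet above showed):

```python
import torch

if torch.cuda.is_available():
    x = torch.zeros(10, dtype=torch.float16, device='cuda')

    with torch.autocast('cuda'):
        # Option 1: accept the float32 result from autocast and
        # cast it back to half precision yourself.
        y = torch.acos(x).half()
        assert y.dtype == torch.float16

        # Option 2: disable autocast just for this op, so acos runs
        # directly on the float16 input and no cast is inserted.
        with torch.autocast('cuda', enabled=False):
            z = torch.acos(x)
            assert z.dtype == torch.float16
```

Option 1 is the simpler fix, but note it still pays for the float32 computation; option 2 keeps the whole op in float16 at the cost of the precision autocast was trying to preserve.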
