RuntimeError: "lu_cuda" not implemented for 'Half'

Hi All,

I’ve started running my code on a GPU and changed the default dtype via torch.set_default_dtype(torch.half). However, my model uses torch.slogdet internally, and it no longer works. I assume that’s because torch.half isn’t supported by torch.slogdet? (Or is it to do with the LU decomposition that might be happening under the hood within torch.slogdet?)

RuntimeError: "lu_cuda" not implemented for 'Half'
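A minimal snippet along these lines should reproduce it (the 4x4 matrix is just a placeholder; a CUDA device is assumed):

```python
import torch

# torch.slogdet relies on an LU factorization under the hood, and the CUDA
# LU kernel has no float16 implementation, so a half-precision input fails.
a = torch.randn(4, 4, device="cuda", dtype=torch.half)
sign, logabsdet = torch.slogdet(a)
# -> RuntimeError: "lu_cuda" not implemented for 'Half'
```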

Is there a potential workaround for this? Or is this something that will be added in the future?

Thank you in advance!

Linear algebra methods are usually unstable in reduced precision, so you should use float32 for these kinds of operations.
I don’t think float16 support will be added to lu_cuda for the aforementioned stability reasons.
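If the rest of the model needs to stay in float16, a manual workaround along these lines should do it (just a sketch with a placeholder matrix): cast the input to float32 for the slogdet call and, if needed, cast the result back.

```python
import torch

# Sketch: keep the surrounding tensors in float16, but compute the
# determinant in float32 to avoid the unimplemented half-precision LU kernel.
a_half = torch.randn(4, 4, device="cuda", dtype=torch.half)
sign, logabsdet = torch.slogdet(a_half.float())  # upcast just for this op
logabsdet = logabsdet.to(a_half.dtype)           # optionally cast back
```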

Thanks for the response and explanation!

Do you think it might be possible to use the automatic mixed-precision module to get the speed benefits for the linear layers I use, while keeping torch.slogdet in float32?

Thanks!

Yes, that should be possible. The automatic mixed-precision autocast should use float32 for slogdet, if I’m not mistaken, but you would most likely need to remove the usage of set_default_dtype.
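A rough sketch of what that could look like (the layer sizes, input, and the square matrix below are made up for illustration):

```python
import torch

linear = torch.nn.Linear(64, 64).cuda()
x = torch.randn(8, 64, device="cuda")

with torch.cuda.amp.autocast():
    y = linear(x)                         # matmul-heavy ops run in float16
    m = torch.randn(4, 4, device="cuda")  # created in float32
    sign, logabsdet = torch.slogdet(m)    # computed in float32
```

If slogdet ever ends up receiving a float16 tensor and complains, you can always cast it explicitly with .float() before the call.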
