Error while using 16-bit floats (.half())

I’m trying to run my code using 16-bit floats. I convert the model and the data to 16-bit with no problem, but when I want to compute the loss, I get the following error:

return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Half'

I’m wondering if there is something that I have to do or simply my choice of loss function does not support 16-bit floats. Thanks!


nn.NLLLoss and thus also nn.CrossEntropyLoss don’t support float16 tensors on the CPU, if I’m not mistaken, so you could either use the GPU or transform the model output tensor to float32 before calculating the loss.
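A minimal sketch of the manual-cast approach (the model, shapes, and class count here are made up for illustration): keep the half-precision forward pass, but cast the logits to float32 right before the loss.

```python
import torch
import torch.nn.functional as F

# Hypothetical float16 logits and integer class targets
logits = torch.randn(4, 10).half()        # e.g. model(input).half() output
targets = torch.randint(0, 10, (4,))

# F.cross_entropy doesn't support float16 here, so cast the
# logits up to float32 before computing the loss
loss = F.cross_entropy(logits.float(), targets)
```

The targets stay as int64 class indices; only the logits need the cast.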
Also, in case you don’t want to manually transform your tensors, use the mixed-precision utility via torch.cuda.amp/torch.autocast.
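And a sketch of the autocast route (the tiny linear model and data here are placeholders): inside the `torch.autocast` context, eligible ops run in reduced precision while the loss computation is kept numerically safe, so no manual `.half()` calls are needed.

```python
import torch
import torch.nn as nn

device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = nn.Linear(10, 5).to(device)       # placeholder model
data = torch.randn(4, 10, device=device)
targets = torch.randint(0, 5, (4,), device=device)
criterion = nn.CrossEntropyLoss()

# autocast picks the reduced-precision dtype per device
# (float16 on CUDA, bfloat16 on CPU by default)
with torch.autocast(device_type=device):
    output = model(data)
    loss = criterion(output, targets)
```

For full CUDA training you would typically pair this with `torch.cuda.amp.GradScaler` to avoid underflowing float16 gradients.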

Thanks. I took your advice here and used torch.autocast, as follows:

with torch.autocast('cuda'):
    ... (my code) ...