PyTorch 1.0 HalfTensor support

loss.backward()

File "/anaconda3/envs/pytorch0.4/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/anaconda3/envs/pytorch0.4/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: "add" not implemented for 'torch.HalfTensor'

I tried to use HalfTensor to reduce memory overhead.
When I call loss.backward(), an error occurs indicating that the 'add' operator is not implemented for HalfTensor.

Are you trying to run your model on the CPU?
As far as I know, FP16 is well supported on the GPU, but support on the CPU side is much more limited.
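
For example, something along these lines (a minimal sketch with a toy nn.Linear standing in for your model, assuming a CUDA device is available) backpropagates fine in FP16 on the GPU, while the same ops on CPU half tensors can raise the error you posted:

import torch
import torch.nn as nn

device = torch.device('cuda')                  # FP16 autograd kernels are implemented for CUDA
model = nn.Linear(16, 4).to(device).half()     # toy model, parameters cast to FP16
x = torch.randn(8, 16, device=device).half()   # FP16 input on the GPU

loss = model(x).sum()
loss.backward()                                # works on the GPU; many half ops are missing on the CPU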

norm_layer(input_tensor.float().to('cuda'))

File "/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
File "/anaconda3/lib/python3.7/site-packages/torch/nn/modules/normalization.py", line 158, in forward
    input, self.normalized_shape, self.weight, self.bias, self.eps)
File "/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1651, in layer_norm
    torch.backends.cudnn.enabled)
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #3 'tensor1'

I switched to a GPU environment and ran into this error. It seems that self.weight got the CUDA backend, which is unexpected. I use the explicit call input_tensor.float().to('cuda') to make sure the input is a cuda.FloatTensor,
but it makes no difference.

It looks like your norm_layer is still on the CPU.
Could you check this and push it to a GPU if that’s the case?

To me it looks like you are converting a normalization layer (like batchnorm) to half precision. AFAIK, normalization layers should not be converted and the network should run in mixed precision instead.
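
Roughly, one way to do that (a minimal sketch, not from this thread; the helper name is made up, and depending on the PyTorch version you may also need to cast the inputs of the FP32 norm layers back to float) is to cast the whole model to half and then cast the normalization layers back to float:

import torch.nn as nn

NORM_TYPES = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d, nn.LayerNorm)

def network_to_half_keep_norm_fp32(model):
    # Cast all parameters and buffers to FP16, then restore normalization
    # layers (weight, bias, running stats) to FP32 for numerical stability.
    model.half()
    for module in model.modules():
        if isinstance(module, NORM_TYPES):
            module.float()
    return model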

That's because your norm_layer is not on CUDA.
You can revise your code like this:

norm_layer = norm_layer.to('cuda')
norm_layer(input_tensor.float().to('cuda'))
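
Putting it together, a small self-contained sketch of that fix (the LayerNorm size and tensor shape here are just illustrative, not taken from your code):

import torch
import torch.nn as nn

input_tensor = torch.randn(8, 32).half()            # e.g. a half-precision tensor on the CPU
norm_layer = nn.LayerNorm(32)                       # created on the CPU by default

norm_layer = norm_layer.to('cuda')                  # move weight and bias to the GPU
out = norm_layer(input_tensor.float().to('cuda'))   # input and parameters are now both CUDA FloatTensors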