"batch_norm" not implemented for 'Half

My neural net was working perfectly, and now I'm not sure why I'm getting this error on this line:

y_pred = model(train_categorical, train_numerical)

Any idea how this might be fixed, or what it indicates is wrong? Thanks!
Here is the offending stack trace:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-30-9aa7542c12d0> in <module>
     17 for i in range(epochs_per_training):
     18     i += 1
---> 19     y_pred = model(train_categorical, train_numerical)
     20     single_loss = loss_function(y_pred, train_outputs)
     21 

~/new_anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

<ipython-input-25-fd94d1dedadd> in forward(self, x_categorical, x_numerical)
     38         x = self.embedding_dropout(x)
     39 
---> 40         x_numerical = self.batch_norm_num(x_numerical)
     41         x = torch.cat([x, x_numerical], 1)
     42         x = self.layers(x)

~/new_anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

~/new_anaconda3/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py in forward(self, input)
    134             self.running_mean if not self.training or self.track_running_stats else None,
    135             self.running_var if not self.training or self.track_running_stats else None,
--> 136             self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
    137 
    138 

~/new_anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
   2014     return torch.batch_norm(
   2015         input, weight, bias, running_mean, running_var,
-> 2016         training, momentum, eps, torch.backends.cudnn.enabled
   2017     )
   2018 

RuntimeError: "batch_norm" not implemented for 'Half'

Ok, basically batch_norm isn't implemented for float16 (Half) tensors here, so you can't feed it a half-precision input. It has to just be regular float32. Good to know.
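For reference, here's a minimal sketch of what triggers it and how to fix it. This is illustrative, not the original model: bn, x_half, and x_float are made-up names, and the behavior matches the PyTorch version in the traceback (batch_norm on CPU rejecting Half inputs):

import torch

# Minimal reproduction: BatchNorm1d on a Half (float16) tensor raises the
# error above; casting the input back to float32 makes it work.
bn = torch.nn.BatchNorm1d(4)

x_half = torch.randn(8, 4, dtype=torch.float16)
# bn(x_half)  # RuntimeError: "batch_norm" not implemented for 'Half'

x_float = x_half.float()  # cast Half -> float32
y = bn(x_float)           # works

Applied to the code in the question, that would mean something like train_numerical = train_numerical.float() before the forward pass (and model.float() if the model's parameters were also converted to half precision).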
