Error when calling clip_grad_norm_

I got the following error when calling clip_grad_norm_. Any idea why it happens?


Traceback (most recent call last):
  File "main.py", line 230, in <module>
    exp.run()
  File "main.py", line 162, in run
    train_loss, train_acc = self.model.train_epoch(self.train_dataloader)
  File "/share/data/lang/users/zeweichu/universal-classification/927/model.py", line 122, in train_epoch
    torch.nn.utils.clip_grad_norm_([p for p in self.network.parameters() if p.requires_grad], self.args.grad_clipping)
  File "/share/data/speech/zewei/anaconda3/lib/python3.6/site-packages/torch/nn/utils/clip_grad.py", line 29, in clip_grad_norm_
    total_norm += param_norm ** norm_type
RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.cuda.DoubleTensor for argument #4 'other'

Could you check the dtypes of your model's parameters and of self.args.grad_clipping?
One of them is a DoubleTensor while the other is a FloatTensor (or the other way around), so the norm accumulation inside clip_grad_norm_ fails.
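A minimal sketch of how to check this (the names network and grad_clipping stand in for your self.network and self.args.grad_clipping):

import torch
import torch.nn as nn

# Stand-in for your model; replace with self.network.
network = nn.Linear(4, 2)

# If this set contains more than one dtype (e.g. float32 and float64),
# clip_grad_norm_ will raise the error from the traceback above.
param_dtypes = {p.dtype for p in network.parameters() if p.requires_grad}
print(param_dtypes)              # expect a single entry, e.g. {torch.float32}

grad_clipping = 5.0              # stand-in for self.args.grad_clipping
print(type(grad_clipping))       # a plain Python float is fine here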

This is a common issue if you create numpy arrays and convert them to torch tensors, since numpy uses float64 as its default dtype.
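A small sketch of how the mismatch typically sneaks in, and two common ways to fix it:

import numpy as np
import torch

weights = np.random.randn(3, 3)                       # numpy defaults to float64
t = torch.from_numpy(weights)
print(t.dtype)                                         # torch.float64 (a DoubleTensor)

# Fix: cast the array before converting, or cast the tensor afterwards.
t32 = torch.from_numpy(weights.astype(np.float32))
t32_alt = torch.from_numpy(weights).float()
print(t32.dtype, t32_alt.dtype)                        # torch.float32 torch.float32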

It seems that some of your parameters are of double dtype and some are of float dtype. Mixed parameter dtypes are currently not supported by clip_grad_norm_, unfortunately.
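As a workaround until that is supported, you can cast the whole model to one dtype before clipping; a minimal sketch (model and max_norm are placeholders for your own model and clipping value):

import torch
import torch.nn as nn

model = nn.Linear(4, 2).double()             # pretend part of the model ended up as double
model.float()                                # nn.Module.float() casts all float params/buffers to float32

out = model(torch.randn(8, 4)).sum()         # dummy forward/backward so gradients exist
out.backward()
torch.nn.utils.clip_grad_norm_(
    [p for p in model.parameters() if p.requires_grad], max_norm=5.0)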

I submitted an issue on this at https://github.com/pytorch/pytorch/issues/12159.