Error training in double precision

Hello,

I have a 3D-CNN which I would like to train in double precision on my CPU.
After creating the model I convert it to double precision with model.double(), but the forward pass fails with the error:

  File "/opt/anaconda3/envs/ann/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 480, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #2 'weight' in call to _thnn_conv3d_forward

However, if I check the dtype of the parameters, in particular those of the layer that raises the error, with:

 for param in self.compressionConv.state_dict():
     print(self.compressionConv.state_dict()[param].dtype)

I get a long list of torch.float64 which, I assume, is what I want.

Is there anything else I need to do to make the model use double precision?

Thanks!
Helios

Hi,

Did you convert the input to double as well, with input = input.double()? (This is not an in-place operation on Tensors; you have to use the returned value.)
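
For example, a minimal sketch of the fix (the Conv3d layer and the random input here are just hypothetical stand-ins for the actual model and data):

  import torch
  import torch.nn as nn

  model = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3)
  model.double()                      # converts parameters and buffers in place

  x = torch.randn(2, 1, 16, 16, 16)   # float32 by default
  x = x.double()                      # returns a new double tensor; keep the result

  out = model(x)                      # dtypes now match, no RuntimeError
  print(out.dtype)                    # torch.float64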

Hi,

Great, that fixed it!
I hadn't done it because I was instead using

torch.set_default_dtype(torch.float64)

and I understood it to mean that torch would always use double precision tensors. If that's not the case, what does that command actually do?

Thanks a lot!
Helios

It does, unless you specify otherwise :wink:
Doing torch.tensor([1, 2]) will still give you a long Tensor, for example.
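
A quick sketch of what the default dtype does and does not affect (the Linear layer and the numpy array here are only illustrative):

  import numpy as np
  import torch

  torch.set_default_dtype(torch.float64)

  print(torch.tensor([1.0, 2.0]).dtype)       # torch.float64 -- float literals follow the default
  print(torch.tensor([1, 2]).dtype)           # torch.int64 ("long") -- integers are not affected
  print(torch.nn.Linear(3, 3).weight.dtype)   # torch.float64 -- new modules pick up the default

  # A tensor that already has a dtype keeps it, e.g. data coming from a float32 numpy array:
  print(torch.from_numpy(np.zeros(3, dtype=np.float32)).dtype)   # torch.float32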