64-bit double-precision inference in PyTorch

Hi,

I trained a network in PyTorch with the default 32-bit float precision and saved the model state dict. I would now like to load it for testing purposes and have all the math done in double precision.

What is the way to do this?

I put the following in the beginning of my code:

torch.set_default_tensor_type(torch.DoubleTensor)
torch.set_default_dtype(torch.float64)

But I get this error message:

  File "lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 81, in forward
    exponential_average_factor, self.eps)
  File "lib/python3.6/site-packages/torch/nn/functional.py", line 1656, in batch_norm
    training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: Expected tensor for argument #1 'input' to have the same type as tensor for argument #2 'weight'; but type torch.cuda.FloatTensor does not equal torch.cuda.DoubleTensor (while checking arguments for cudnn_batch_norm)

Appreciate your inputs, thanks!

You also need to convert your model parameters to double, i.e., call model.double(). Setting the default dtype only affects newly created tensors; the parameters and buffers loaded from your float32 checkpoint stay float32 until you convert them, which is why cudnn_batch_norm complains about the input/weight type mismatch.
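A minimal sketch of the full workflow (the small Sequential model and the checkpoint path are placeholders standing in for your actual network):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained network; use your own architecture.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.BatchNorm1d(8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

# model.load_state_dict(torch.load("checkpoint.pt"))  # load the float32 state dict

# Convert every parameter and buffer (including BatchNorm running stats) to float64.
model = model.double()
model.eval()

# Inputs must also be double, or the dtype mismatch error reappears.
x = torch.randn(3, 4, dtype=torch.float64)
with torch.no_grad():
    y = model(x)
print(y.dtype)  # torch.float64
```

Note that `.double()` works the same whether the model lives on CPU or GPU; if you train on GPU, call it after moving the model with `.cuda()` or before, the order does not matter.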
Thanks! Let me give it a shot.