Is there an efficient way to convert a model with float32 params to float64?

Hi All,

I was wondering if it’s at all possible to efficiently convert all my model parameters from float32 to float64? I’ve pretrained my model in float32, but when running it I get NaN errors, even though I know the model works in float64. So, I’d ideally like to just change the dtype of the parameters.

Is there a way to easily change my model parameters from float32 to float64?

Any help is appreciated! :slight_smile:


Hey @AlphaBetaGamma96

Have you tried model.double()? That should convert the model parameters to float64.

After that, be aware that the inputs to the model also need to be double-precision tensors.
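A minimal sketch, using a toy nn.Linear as a stand-in for your pretrained model:

```python
import torch
import torch.nn as nn

# Toy stand-in for the pretrained model; parameters are float32 by default.
model = nn.Linear(4, 1)
model.double()  # converts all parameters and buffers to float64 in place

x = torch.randn(8, 4)    # float32 input
out = model(x.double())  # inputs must be cast to float64 as well
print(out.dtype)         # torch.float64
```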


Great, that works for my model, but there are one or two other variables outside my model that I also need to convert.

I’ve tried applying this to my optimizer, and it yields AttributeError: 'Adam' object has no attribute 'double' (when calling optim.double()). I’d assume this is because model subclasses nn.Module, whereas optim subclasses torch.optim.Optimizer. Is there a way to apply this to the optimizer as well?

Thank you! :slight_smile:

I believe there’s no need to do that with the optimizer. To reinitialize the optimizer on the new version of the model, do optimizer = optim.Adam(model.parameters()).
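For example, a quick sketch (again with a toy nn.Linear standing in for the real model):

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(4, 1)  # stand-in for the pretrained model
model.double()           # parameters are now float64

# Build a fresh optimizer over the converted parameters; the old optimizer's
# internal state (e.g. Adam's moment estimates) would otherwise still be float32.
optimizer = optim.Adam(model.parameters())
```

Note that reinitializing discards any moment estimates accumulated during pre-training, which is usually fine when starting a new training phase.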

Valid point. I did ‘solve’ that issue by using torch.set_default_dtype(torch.float64), but it does make sense to reinitialize the optimizer after pre-training!
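For reference, a short sketch of what torch.set_default_dtype changes (the nn.Linear here is just an illustration):

```python
import torch

torch.set_default_dtype(torch.float64)

# Newly created floating-point tensors and freshly constructed module
# parameters now default to float64.
x = torch.randn(3)
layer = torch.nn.Linear(3, 1)
print(x.dtype, layer.weight.dtype)  # torch.float64 torch.float64
```

This only affects tensors created after the call, so a model pretrained in float32 still needs model.double() (or equivalent) to convert its existing parameters.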
