I was wondering if it’s at all possible to efficiently convert all my model parameters from float32 to float64? I’ve pretrained my model in float32, but when running it I get a NaN error, even though I know the model works in float64. So, I’d ideally like to just convert the dtype.
Is there a way to easily change my model parameters from float32 to float64?
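For an nn.Module, calling .double() (or equivalently .to(torch.float64)) casts all floating-point parameters and buffers to float64. A minimal sketch; the toy nn.Sequential here is just a hypothetical stand-in for your pretrained model:

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for the pretrained network.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# nn.Module.double() casts all floating-point parameters and buffers
# to float64 in place and returns the module.
model = model.double()

# Equivalently, .to() accepts a target dtype:
model = model.to(torch.float64)

# Inputs must match the new parameter dtype:
x = torch.randn(4, 8, dtype=torch.float64)
out = model(x)
```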
Great, that works for my model, but there are one or two other variables outside the model that I also need to convert.
I’ve tried applying this to my optimizer, and it yields AttributeError: 'Adam' object has no attribute 'double' (when calling optim.double()). I’d assume this is because my model subclasses nn.Module whereas the optimizer subclasses torch.optim.Optimizer. Is there a way to apply this to the optimizer as well?
I believe there’s no need to do that with the optimizer. To reinitialize the optimizer on the new version of the model, do optimizer = optim.Adam(model.parameters()).
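A minimal sketch of that, again with a hypothetical toy model standing in for the pretrained one:

```python
import torch.nn as nn
import torch.optim as optim

# Hypothetical model standing in for the pretrained network.
model = nn.Linear(8, 1)

# Cast the model to float64 first, then build a fresh optimizer so
# its internal state (e.g. Adam's moment estimates) is created from
# the already-converted parameters rather than the float32 ones.
model = model.double()
optimizer = optim.Adam(model.parameters())
```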
Valid point. I did ‘solve’ that issue by using torch.set_default_dtype(torch.float64), but it does make sense to reinitialize the optimizer after pre-training!
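For reference, a minimal sketch of the set_default_dtype approach. Note that it only affects tensors (and module parameters) created after the call; a model already loaded in float32 still needs an explicit cast:

```python
import torch
import torch.nn as nn

# torch.set_default_dtype changes the dtype used for *newly created*
# floating-point tensors; existing tensors keep their dtype.
torch.set_default_dtype(torch.float64)

x = torch.randn(3)
print(x.dtype)  # torch.float64

# Modules constructed after the call also get float64 parameters:
layer = nn.Linear(4, 2)
print(layer.weight.dtype)  # torch.float64
```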