I am currently building an ANN model using PyTorch on a diabetes dataset. First I converted every feature into float64, then implemented the forward prop and back prop, but during the training loop I get the error: mat1 and mat2 must have the same dtype, but got Double and Float.
How can I fix it, and what's the problem here?
Hey! It's probably because your features are stored as Double (torch.float64) tensors, while your model parameters use the default Float (torch.float32) dtype. Both must have the same precision for the matrix multiplication in the forward pass to work.
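For illustration, here's a minimal sketch that reproduces the mismatch (the layer sizes are just placeholders):

import torch
import torch.nn as nn

model = nn.Linear(8, 1)                      # parameters default to torch.float32
x = torch.randn(4, 8, dtype=torch.float64)   # features stored as float64 (Double)
out = model(x)                               # raises the dtype-mismatch RuntimeError you're seeing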
You can do:
model = model.to(dtype=torch.float64)
to use float64 precision for your model (this will use twice as much memory though), or you can use:
input = input.to(dtype=torch.float32)
to use float32 precision for your inputs (or simply create them as float32 directly rather than float64).
Did you maybe only change the data to float64 and not the model itself??
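By the way, a very common source of this is that NumPy arrays (e.g. loaded via pandas or scikit-learn) default to float64, so any tensors built from them inherit that dtype. Here's a minimal sketch of the float32 route, assuming your features and labels are NumPy arrays called X and y (those names and shapes are just placeholders):

import numpy as np
import torch
import torch.nn as nn

X = np.random.rand(100, 8)                   # float64 by default, like data loaded via pandas/sklearn
y = np.random.rand(100, 1)

X_t = torch.tensor(X, dtype=torch.float32)   # cast the data down to float32...
y_t = torch.tensor(y, dtype=torch.float32)   # ...so it matches the model's default parameter dtype

model = nn.Linear(8, 1)                      # float32 parameters by default
loss = nn.functional.mse_loss(model(X_t), y_t)  # forward pass now runs with no dtype mismatch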