Expected scalar type Double but found Float but everything has dtype=float64


I use a Python library to generate my data (Python 3.7). The default floating-point precision is 64 bit, so the generated data is returned as NumPy arrays with dtype=float64.
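As a minimal check of that claim (a sketch, not the actual data-generation code from my project), NumPy's default float type on most platforms is indeed 64-bit:

```python
import numpy as np

# On most platforms NumPy's default float type is 64-bit,
# so freshly generated arrays carry dtype=float64.
data = np.random.randn(8, 128)
print(data.dtype)  # float64
```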

Traceback (most recent call last):
  File "train.py", line 256, in <module>
    train(TrainDL, model, loss_fn, optimizer)
  File "train.py", line 210, in train
    labels_pred = model(samples)
  File "/home/user/venvs/MLGWSC-1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/venvs/MLGWSC-1/lib/python3.7/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/user/venvs/MLGWSC-1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/venvs/MLGWSC-1/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 298, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/user/venvs/MLGWSC-1/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 295, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: expected scalar type Double but found Float

My program is described in more detail in the thread How to make my CNN batch size independent.
One small difference is that I now wrap my data in PyTorch tensors and unsqueeze it:

class Dataset(torch.utils.data.Dataset):
    def __init__(self, samples, labels):
        self.samples = torch.tensor(samples)
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        return self.samples[i].unsqueeze(0), self.labels[i].unsqueeze(0)

The data passed to the model, i.e. in the call labels_pred = model(samples), is all of dtype=float64.


All levels of my data are float64.

According to the torch.Tensor — PyTorch 1.10.0 documentation, float64 should be treated as Double.

I can’t really see where I would have floats. Everything seems to be double precision.
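A quick way to see where the Float actually comes from is to print the dtypes on both sides of the call. This is a sketch with a bare Conv1d standing in for the real network (an assumption, not the model from the thread):

```python
import torch

# A single Conv1d stands in for the CNN; its weights use
# PyTorch's float32 default, regardless of the input data.
model = torch.nn.Conv1d(1, 4, kernel_size=3)
samples = torch.zeros(2, 1, 16, dtype=torch.float64)  # what the Dataset yields

print(samples.dtype)                   # torch.float64 -> "Double"
print(next(model.parameters()).dtype)  # torch.float32 -> "Float"
```

So the "floats" are not in the data at all: they are the model's own parameters.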

Try model = model.double() before the forward pass.


If I use .float(), as in model(samples.float()), then it works, which adds to my confusion.

I really don’t think I should need to convert anything at all.

PyTorch uses float32 by default to initialize the model's parameters, tensors, etc.
As you've described, the input tensors are float64, which creates a dtype mismatch.
You would thus have to either cast the input data to float32 or the model to float64.
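Both fixes in a minimal sketch (again with a stand-in Conv1d, not the actual network):

```python
import torch

model = torch.nn.Conv1d(1, 4, kernel_size=3)
samples = torch.zeros(2, 1, 16, dtype=torch.float64)

# Option 1: cast the input down to the model's float32 default
out32 = model(samples.float())

# Option 2: cast the model's parameters up to float64 instead
out64 = model.double()(samples)

print(out32.dtype, out64.dtype)  # torch.float32 torch.float64
```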

Normally, model.double() is not recommended: a double-precision model and data mean double the gradient size and double the memory.

Unless double precision is important to you, the best solution is return self.samples[i].unsqueeze(0).float(), self.labels[i].unsqueeze(0).float().
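Applied to the Dataset from the question, the cast can also happen once in __init__ rather than on every __getitem__ call — a sketch, assuming the incoming samples/labels are NumPy float64 arrays:

```python
import numpy as np
import torch

class Dataset(torch.utils.data.Dataset):
    def __init__(self, samples, labels):
        # Cast once up front instead of per item in __getitem__.
        self.samples = torch.tensor(samples, dtype=torch.float32)
        self.labels = torch.tensor(labels, dtype=torch.float32)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        return self.samples[i].unsqueeze(0), self.labels[i].unsqueeze(0)

ds = Dataset(np.zeros((4, 16)), np.zeros(4))
x, y = ds[0]
print(x.dtype, x.shape)  # torch.float32 torch.Size([1, 16])
```

Doing the cast at construction time trades a little extra work in __init__ for cheaper item access during training; per-item .float() in __getitem__ works just as well.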