Hello,
I use a Python library to generate data (on Python 3.7). The default floating-point precision is 64-bit, so the generated data is returned as NumPy arrays with dtype=float64. When I pass this data to my model I get the following traceback:
```
Traceback (most recent call last):
  File "train.py", line 256, in <module>
    train(TrainDL, model, loss_fn, optimizer)
  File "train.py", line 210, in train
    labels_pred = model(samples)
  File "/home/user/venvs/MLGWSC-1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/venvs/MLGWSC-1/lib/python3.7/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/user/venvs/MLGWSC-1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/venvs/MLGWSC-1/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 298, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/user/venvs/MLGWSC-1/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 295, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: expected scalar type Double but found Float
```
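For what it's worth, I can reproduce the same kind of error with a toy example (a small Conv1d standing in for my actual model): feeding a float64 tensor into a freshly constructed layer fails, since a conv layer expects its input and weight dtypes to match.

```python
import torch

# A freshly constructed conv layer (weights are created by PyTorch's defaults).
conv = torch.nn.Conv1d(1, 8, kernel_size=3)

# A dummy float64 (Double) input batch, like my NumPy-generated data.
x = torch.zeros(4, 1, 16, dtype=torch.float64)

try:
    conv(x)
except RuntimeError as e:
    # Raises because the input dtype and the weight dtype disagree.
    print(e)
```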
My program is described in more detail in this thread: How to make my CNN batch size independent. One small difference is that I now wrap my data in PyTorch tensors and unsqueeze it:
```python
class Dataset(torch.utils.data.Dataset):
    def __init__(self, samples, labels):
        self.samples = torch.tensor(samples)
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        return self.samples[i].unsqueeze(0), self.labels[i].unsqueeze(0)
```
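A self-contained run of this class (with dummy float64 NumPy arrays standing in for my real data) confirms that torch.tensor() inherits NumPy's dtype, so the wrapped samples stay float64:

```python
import numpy as np
import torch

class Dataset(torch.utils.data.Dataset):
    def __init__(self, samples, labels):
        self.samples = torch.tensor(samples)
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        return self.samples[i].unsqueeze(0), self.labels[i].unsqueeze(0)

# NumPy defaults to float64, and torch.tensor() keeps that dtype.
ds = Dataset(np.zeros((10, 16)), np.zeros(10))
x, y = ds[0]
print(x.dtype)  # torch.float64
```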
The data passed to the model, i.e. in the call `labels_pred = model(samples)`, is all of dtype=float64:

```python
print(samples.dtype)
print(samples[0].dtype)
print(samples[0][0].dtype)
```

All levels of my data are float64.
According to the torch.Tensor — PyTorch 1.10.0 documentation, float64 corresponds to Double. I can't really see where I would have Floats; everything seems to be double precision.
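My only remaining guess is that the Float the error mentions comes not from my data but from the model itself: PyTorch creates layer weights as float32 by default, regardless of the input's dtype. A quick check on a toy model (a stand-in for my actual network) shows this:

```python
import torch

# A toy model; PyTorch initializes its parameters as float32 (Float)
# by default, independent of what dtype the inputs will have.
model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 8, kernel_size=3),
    torch.nn.ReLU(),
)
print(next(model.parameters()).dtype)  # torch.float32
```

So the mismatch would be float64 inputs meeting float32 weights, which matches the error message.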