I have come across a fairly common error: RuntimeError: expected scalar type Double but found Float. But the tensor is already double, so the error probably originates somewhere I don't see.
It comes from this line:
h = self.batch_norms[layer](h)
Again, h is float64 (i.e. double) and layer is an int.
By the way, self.batch_norms is a plain torch.nn.ModuleList, i.e. self.batch_norms = torch.nn.ModuleList(), declared in the constructor.
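For what it's worth, the mismatch seems reproducible in isolation: BatchNorm parameters (weight, bias, running stats) default to float32, so feeding a float64 tensor into a freshly constructed BatchNorm1d raises exactly this message. A minimal sketch (feature size and names are made up for illustration):

```python
import torch

bn = torch.nn.BatchNorm1d(4)                     # parameters are float32 by default
h = torch.randn(8, 4, dtype=torch.float64)       # double input, like h above

try:
    bn(h)                                        # dtype mismatch: double input, float params
except RuntimeError as e:
    print(e)                                     # "expected scalar type Double but found Float"

# Fix 1: cast the module's parameters to double to match the input...
bn_double = torch.nn.BatchNorm1d(4).double()
out = bn_double(h)

# Fix 2: ...or cast the input down to float32 to match the default parameters.
out2 = bn(h.float())
```

If this is the cause, calling model.double() once after construction (or keeping the data in float32 throughout) should make the dtypes consistent.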
Here is the full traceback:
RuntimeError Traceback (most recent call last)
C:\Users\ROSTYS~1\AppData\Local\Temp/ipykernel_6476/2774919388.py in <module>
1 for data in loader:
----> 2 out = model(data.x, data.edge_index, deg)
~\anaconda3\envs\resal\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
C:\Users\ROSTYS~1\AppData\Local\Temp/ipykernel_6476/730682502.py in forward(self, x, edge_index, deg)
58 h = self.convs[layer](h=h, edge_index=edge_index, deg = deg)
59 h = self.activation(h)
---> 60 h = self.batch_norms[layer](h)
61
62 if self.task == "node":
~\anaconda3\envs\resal\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~\anaconda3\envs\resal\lib\site-packages\torch\nn\modules\batchnorm.py in forward(self, input)
165 used for normalization (i.e. in eval mode when buffers are not None).
166 """
--> 167 return F.batch_norm(
168 input,
169 # If buffers are not to be tracked, ensure that they won't be updated
~\anaconda3\envs\resal\lib\site-packages\torch\nn\functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
2279 _verify_batch_size(input.size())
2280
-> 2281 return torch.batch_norm(
2282 input, weight, bias, running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled
2283 )
RuntimeError: expected scalar type Double but found Float
Does anybody know where the problem is?
Thanks!