Based on the stacktrace the error is raised in `F.relu(self.fc1(state))`, so check the dtype of `state` and make sure it's `float32`.
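As a quick illustration (a minimal standalone sketch, with a made-up `state` tensor rather than one from your code): NumPy arrays default to `float64`, so a tensor built from one will not match a model's `float32` parameters until you cast it:

```python
import numpy as np
import torch

# NumPy defaults to float64, which from_numpy preserves as torch.float64
state = torch.from_numpy(np.random.randn(8))
print(state.dtype)  # torch.float64

# Cast to float32 to match the default dtype of nn.Linear / nn.Conv2d parameters
state = state.float()
print(state.dtype)  # torch.float32
```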
Hi,
Sorry for opening this thread, but I am facing this issue and for some reason I am not able to solve it using your method. This is my code:

```python
log_mel = librosa.power_to_db(mel)
# print(mel)
print(log_mel.dtype)
log_mel = log_mel.astype(np.double)
print(log_mel.dtype)
log_mel = np.stack((log_mel, log_mel, log_mel))
img = torch.tensor(log_mel, dtype=torch.long)
# img = img.type(torch.LongTensor)
print(img.dtype)
```
And the output is:

```
float64
float64
torch.int64
```
However, when I pass the tensor to a predefined VGG net (`model = torch.hub.load('pytorch/vision:v0.10.0', 'vgg16', pretrained=True)`) using `res = model(img)`, I get this error:
```
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      1 print(img.dtype)
----> 2 res = model(img)

4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    452                             _pair(0), self.dilation, self.groups)
    453         return F.conv2d(input, weight, bias, self.stride,
--> 454                         self.padding, self.dilation, self.groups)
    455
    456     def forward(self, input: Tensor) -> Tensor:

RuntimeError: expected scalar type Long but found Float
```
What should I do?
You won't be able to pass `LongTensor`s to modules expecting floating-point inputs, so remove the `img = torch.tensor(log_mel, dtype=torch.long)` transformation and keep it as a `DoubleTensor` or `FloatTensor`. (Some modules expect integer types, such as `nn.Embedding`, but that's not the case in your model since you are using conv layers.)
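To illustrate the point (a standalone sketch, not your exact model): a conv layer raises on integer inputs but accepts the same values once cast to a floating-point dtype:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)           # float32 parameters by default
x_long = torch.randint(0, 255, (1, 3, 32, 32))  # int64 (Long) input

try:
    conv(x_long)                                # dtype mismatch -> RuntimeError
except RuntimeError as e:
    print("long input fails:", e)

out = conv(x_long.float())                      # same values, now float32
print(out.shape)  # torch.Size([1, 8, 30, 30])
```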
I tried that. If I don’t transform it, then I get this error:
```
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
----> 1 res = model(img)

4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py in forward(self, input)
    137     def forward(self, input):
    138         for module in self:
--> 139             input = module(input)
    140         return input
    141

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    455
    456     def forward(self, input: Tensor) -> Tensor:
--> 457         return self._conv_forward(input, self.weight, self.bias)
    458
    459 class Conv3d(_ConvNd):

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    452                             _pair(0), self.dilation, self.groups)
    453         return F.conv2d(input, weight, bias, self.stride,
--> 454                         self.padding, self.dilation, self.groups)
    455
    456     def forward(self, input: Tensor) -> Tensor:

RuntimeError: expected scalar type Double but found Float
```
I even tried `img = torch.tensor(log_mel, dtype=torch.double)`, but apparently `double` and `float` are the same.
No, `double` and `float` are not the same: the former is `float64` while the latter is `float32`.
The new error message is raised since you are still mixing up dtypes. Your model is using `float32` (`float`) parameters while the inputs are `float64` (`double`). Make sure to use the same dtype for the inputs and the model and it should work.
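A minimal sketch of the two ways to make the dtypes agree (using a small `nn.Linear` as a stand-in for VGG):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                       # parameters are float32 by default
x64 = torch.randn(1, 4, dtype=torch.float64)  # double (float64) input

# Option 1: cast the input down to the model's dtype
out = model(x64.float())
print(out.dtype)  # torch.float32

# Option 2: cast the model's parameters up to the input's dtype
model.double()
out = model(x64)
print(out.dtype)  # torch.float64
```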
How did I miss that while reading about `double`?
Btw. in case you are using a GPU, I would recommend using `float32`, as `float64` will give you a performance slowdown (unless you really need the additional precision).
Yeah, I've converted from `float64` to `float32`. VGG is set to `float32` by default.
‘Expected floating point type for target with class probabilities, got Long’
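Assuming that error comes from `nn.CrossEntropyLoss` / `F.cross_entropy` (the usual source of that message): a `Long` target is interpreted as class indices, while a floating-point target of shape `(batch, num_classes)` is interpreted as class probabilities, and mixing the two shapes and dtypes raises this error. A minimal sketch of both valid forms (probability targets require PyTorch >= 1.10):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(2, 5)                  # batch of 2, 5 classes

# Form 1: class indices -- shape (2,), dtype long
loss_idx = criterion(logits, torch.tensor([1, 3]))

# Form 2: class probabilities -- shape (2, 5), floating point, rows sum to 1
probs = torch.softmax(torch.randn(2, 5), dim=1)
loss_prob = criterion(logits, probs)
print(loss_idx.item(), loss_prob.item())
```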