RuntimeError: expected scalar type Double but found Float, stuck for a while!

Hello,

I pre-trained a neural network on multivariate time series and saved it. When I load it and try to transfer it to another dataset, I get this error:

Traceback (most recent call last):
  File "…\TL.py", line 176, in <module>
    transfer_L(net, data, learning_rate, True, output_size)
  File "…\TL.py", line 138, in transfer_L
    _, loss = train(input_tensor, output_tensor, net, optimizer)
  File "…\TL.py", line 89, in train
    output = net(input_tensor)
  File "…\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "…\TL.py", line 76, in forward
    lstm_out, _ = self.lstm(input_tensor)
  File "…\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "…\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\torch\nn\modules\rnn.py", line 581, in forward
    result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
RuntimeError: expected scalar type Double but found Float

TL.py performs the transfer learning by fine-tuning only the last layer of my network; here is the function transfer_L(…) that applies it:

    net = torch.load(model_path)
    input_size = len(households_train[0][0][0][0])
    output_size = input_size
    learning_rate = 0.0005
    data = households_train
    lastL = True

    if lastL:
        # Freeze all layers, then replace the final layer so only it is trained
        for param in net.parameters():
            param.requires_grad = False
        net.fc2 = nn.Linear(1000, output_size)

    # Collect only the parameters of the new, unfrozen layer
    params_to_update = []
    for name, param in net.named_parameters():
        if param.requires_grad:
            params_to_update.append(param)
    optimizer = torch.optim.Adam(params_to_update, lr=learning_rate)

    current_loss = 0
    all_losses = []
    plot_steps = 100
    n_iters = len(data)
    for i in range(n_iters):
        input_tensor, output_tensor = data[i][0], data[i][1]
        _, loss = train(input_tensor, output_tensor, net, optimizer)

I have tried to fix it (based on previous topics on Stack Overflow and the PyTorch forums) by:

  • Converting the input and output tensors to double with:
    input_tensor, output_tensor = input_tensor.type(torch.DoubleTensor), output_tensor.type(torch.DoubleTensor)
    => does not solve the issue

  • Converting the numpy array to double before turning it into a torch.Tensor: input = input.astype(np.double)
    => does not solve the issue either

  • Converting the params_to_update to double (because I don't know where the float comes from) with: params_to_update.append(param.double())
    => gave back: ValueError: can't optimize a non-leaf Tensor (a minimal repro of this is sketched below)
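
For reference, a minimal repro of that last error (a sketch; .double() is a differentiable op, so it returns a new tensor that is no longer a leaf):

    import torch
    import torch.nn as nn

    layer = nn.Linear(3, 1)
    p = next(layer.parameters())
    print(p.is_leaf)                 # True: a leaf tensor the optimizer can update
    print(p.double().is_leaf)        # False: .double() returns a new, non-leaf tensor
    torch.optim.Adam([p.double()])   # ValueError: can't optimize a non-leaf Tensor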

Nothing has worked, and I have been stuck for a while now… Please, any help is much appreciated!

Thanks in advance,

Aya


Have you tried changing the hidden state to a double?
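
Something along these lines, for example (a minimal sketch with placeholder sizes; substitute your model's actual dimensions):

    import torch
    import torch.nn as nn

    # Placeholder sizes; substitute your model's actual dimensions
    num_layers, batch, seq_len, input_size, hidden_size = 1, 4, 20, 10, 1000

    lstm = nn.LSTM(input_size, hidden_size, num_layers).double()  # float64 weights
    x = torch.randn(seq_len, batch, input_size, dtype=torch.double)
    h0 = torch.zeros(num_layers, batch, hidden_size, dtype=torch.double)
    c0 = torch.zeros(num_layers, batch, hidden_size, dtype=torch.double)
    out, (hn, cn) = lstm(x, (h0, c0))  # all dtypes agree, so no mismatch error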

I actually changed the input and output tensors to float, and now it works (which seems odd, given that the RuntimeError says it expected Double).
Thanks though!
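
For anyone landing here later, the change was essentially this (a sketch; train, net, and optimizer are from my code above):

    # Cast the data to float32 so it matches the model's float32 weights
    input_tensor = input_tensor.float()    # same as .type(torch.FloatTensor)
    output_tensor = output_tensor.float()
    _, loss = train(input_tensor, output_tensor, net, optimizer)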


I faced the same problem and converting it to float worked. Does anyone know why this happens?


Since you've fixed the issue by transforming the tensor or model to float(), check where that tensor is created and narrow down why it was a DoubleTensor in the first place. This issue is often caused by converting numpy arrays to tensors, since numpy uses float64 by default.
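
A minimal sketch of how that mismatch typically arises, and the two ways to resolve it:

    import numpy as np
    import torch
    import torch.nn as nn

    arr = np.random.randn(4, 10)   # numpy defaults to float64
    x = torch.from_numpy(arr)      # the tensor inherits float64, i.e. "Double"
    print(x.dtype)                 # torch.float64

    layer = nn.Linear(10, 2)       # PyTorch modules default to float32
    # layer(x) would raise the dtype-mismatch RuntimeError here

    out = layer(x.float())         # fix on the data side (most common), or...
    out = layer.double()(x)        # ...convert the model to float64 instead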
