Converting vector of double to tensor, issues

Hello!
I am trying to convert a std::vector<double> to a torch::Tensor with the following code:

torch::Tensor next_state = torch::from_blob(replay_memory.at(trainingIDX + 1).current_state.data(), { k_input_size_ }).to(*device);

However, I have done some troubleshooting, since my training loop, which uses this tensor, crashes because the tensor contains nan values.

So I tried:

std::cout << replay_memory.at(trainingIDX+1).current_state << std::endl;
torch::Tensor next_state = torch::from_blob(replay_memory.at(trainingIDX + 1).current_state.data(), { k_input_size_ }).to(*device);
std::cout << "next state is: " << next_state << std::endl;

which, at the iteration of the crash, gives me:

0.0642382 0.0395936 0 0.219108 0.372894 0.422909 0.554452 0.302765 1
next state is: -1.7024e+35
1.3785e+00
-6.2066e+14
1.2834e+00
0.0000e+00
0.0000e+00
-1.0163e+17
1.5941e+00
nan

Note that:

torch::Tensor current_state = torch::tensor(replay_memory.at(trainingIDX).current_state).to(*device);

Gives me a tensor that looks the way I want it but then the program crashes when I try to feed it through the network, an issue I did not have with from_blob…

Hello,

I think you should specify the scalar type of the tensor elements when using from_blob, like: torch::from_blob(replay_memory.at(trainingIDX + 1).current_state.data(), { k_input_size_ }, torch::TensorOptions(torch::kFloat64))

Also, if your device is CUDA and your data replay_memory.at(trainingIDX + 1).current_state.data() does not live in GPU memory, you will have to clone the tensor before calling to, like:

torch::from_blob(...).clone().to(*device)

Thanks for the reply. I tried what you said with both kFloat64 and kDouble. However, I still get an error when I try to forward this tensor in the network:

Exception: Expected object of scalar type Double but got scalar type Float for argument #2 ‘mat2’ in call to _th_mm (checked_dense_tensor_unwrap at C:\w\1\s\windows\pytorch\aten\src\ATen/Utils.h:84)

Also, I’m not using GPU, only CPU.

The module parameters interacting with your input tensor should have the same type as the input tensor.
I think it would be easier to work with a torch::kFloat tensor, so you should convert your data to float before creating the input with from_blob.

This worked and I will mark your post as the solution. However, I am not totally happy with having to convert my double vector to a float vector.