Converting vector of double to tensor, issues

I am trying to convert an std::vector&lt;double&gt; to a torch::Tensor with the following code:

torch::Tensor next_state = torch::from_blob( + 1), { k_input_size_ }).to(*device);

However, my training loop, which uses this tensor, crashes because the tensor contains NaN values, so I did some troubleshooting.

So I tried:

std::cout << << std::endl;
torch::Tensor next_state = torch::from_blob( + 1), { k_input_size_ }).to(*device);
std::cout << "next state is: " << next_state << std::endl;

At the iteration where the crash occurs, this gives me:

0.0642382 0.0395936 0 0.219108 0.372894 0.422909 0.554452 0.302765 1
next state is: -1.7024e+35

Note that:

torch::Tensor current_state = torch::tensor(*device);

Gives me a tensor that looks the way I want it, but then the program crashes when I try to feed it through the network, an issue I did not have with from_blob…


I think you should specify the element type of the tensor when using from_blob, like: torch::from_blob( + 1), { k_input_size_ }, torch::TensorOptions(torch::kFloat64))

Also, if your device is CUDA and your data ( + 1) does not live in GPU memory, you will have to clone the tensor before calling .to(), like:


Thanks for the reply. I tried what you said with both kFloat64 and kDouble. However, I still get an error when I try to forward this tensor in the network:

Exception: Expected object of scalar type Double but got scalar type Float for argument #2 ‘mat2’ in call to _th_mm (checked_dense_tensor_unwrap at C:\w\1\s\windows\pytorch\aten\src\ATen/Utils.h:84)

Also, I’m not using GPU, only CPU.

The module parameters interacting with your input tensor should have the same type as the input tensor.
I think it would be easier to work with a torch::kFloat tensor, so you should convert your data to float before creating the input with from_blob.

This worked and I will mark your post as the solution, though I am not totally happy about having to convert my double vector to float.