This gives me a tensor that looks the way I want, but then the program crashes when I try to feed it through the network, an issue I did not have with from_blob…
I think you should specify the data type of the tensor elements when using from_blob, like: torch::from_blob(replay_memory.at(trainingIDX + 1).current_state.data(), { k_input_size_ }, torch::TensorOptions(torch::kFloat64))
Also, if your device is CUDA and your data replay_memory.at(trainingIDX + 1).current_state.data() does not live in GPU memory, you will have to clone the tensor before using .to(), like:
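Something along these lines, as a rough sketch (the variable names state and input are just for illustration, and torch::kCUDA is assumed to be your target device):

    // Wrap the raw CPU buffer; from_blob does not take ownership of the memory.
    auto state = torch::from_blob(
        replay_memory.at(trainingIDX + 1).current_state.data(),
        { k_input_size_ },
        torch::TensorOptions().dtype(torch::kFloat64));

    // Clone so the tensor owns its own storage, then move it to the GPU.
    auto input = state.clone().to(torch::kCUDA);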
Thanks for the reply. I tried what you said with both kFloat64 and kDouble. However, I still get an error when I try to forward this tensor through the network:
Exception: Expected object of scalar type Double but got scalar type Float for argument #2 'mat2' in call to _th_mm (checked_dense_tensor_unwrap at C:\w\1\s\windows\pytorch\aten\src\ATen/Utils.h:84)
The module parameters that interact with your input tensor have to have the same scalar type as the input tensor. The error means your network's weights are still torch::kFloat (the libtorch default), while the input you are feeding it is torch::kFloat64 (kDouble is just an alias for kFloat64).
I think it would be easier to work with a torch::kFloat tensor, so you should convert your data to float before creating the input with from_blob, for example:
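A rough sketch of that approach, assuming current_state holds doubles in a contiguous container and that net stands in for your network module (names other than the ones from your snippet are just placeholders):

    // Copy the double data into a float buffer so the input matches the
    // network's default kFloat parameters.
    const auto& src = replay_memory.at(trainingIDX + 1).current_state;
    std::vector<float> state_f(src.begin(), src.end());

    // Build the input over the float buffer; clone so the tensor owns its
    // memory and does not dangle once state_f goes out of scope.
    auto input = torch::from_blob(
                     state_f.data(),
                     { k_input_size_ },
                     torch::TensorOptions().dtype(torch::kFloat))
                     .clone();

    auto output = net->forward(input);

The alternative is to keep your kFloat64 input and convert the module instead (something like net->to(torch::kFloat64)), but single precision is usually enough and matches the libtorch defaults.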