How to convert a C++ standard vector of features into a Tensor in PyTorch C++

Hi,
I am learning deep learning using the PyTorch toolkit. I have trained a sample model with the PyTorch Python version, and I am trying to test the trained model in PyTorch C++ using the example-app.cpp that is provided in the PyTorch packages.
If I use the sequence of code below, it crashes at run time.
Code sequence:

auto f = CPU(kFloat).tensorFromBlob(float_buffer, {output_height, output_width});
std::vector<torch::jit::IValue> inputs;
inputs.push_back(f);
auto output = module->forward(inputs).toTensor();

Error message:
[ CPUFloatType{2,32} ]
terminate called after throwing an instance of ‘at::Error’
what(): Tensor that was converted to Variable was not actually a Variable (Variable at

But if I use the sequence of code below, it works well:

std::vector<torch::jit::IValue> inputs;
at::Tensor b1 = torch::randn({2, 32});
inputs.push_back(b1);
auto output = module->forward(inputs).toTensor();
I suspect the conversion of the C++ standard feature vector into a Tensor (auto f = CPU(kFloat).tensorFromBlob(float_buffer, {output_height, output_width});) is wrong, because it produces a tensor of type [ CPUFloatType{2,32} ], whereas at::Tensor b1 = torch::randn({2, 32}) produces a tensor of type
[ Variable[CPUFloatType]{2,32} ]

Is this conversion of the C++ standard feature vector into a Tensor correct?
auto f = CPU(kFloat).tensorFromBlob(float_buffer, {output_height, output_width});
Please help me resolve the above issue.
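As background on what the shape argument means here: a {output_height, output_width} tensor built from a flat buffer is read row by row (row-major order), so the feature vector must be laid out that way before handing its pointer to the tensor constructor. A minimal sketch in plain C++, with hypothetical sizes standing in for the output_height/output_width of the post:

```cpp
#include <cstddef>
#include <vector>

// Row-major layout: element (r, c) of a {height, width} tensor sits at
// flat index r * width + c in the underlying buffer. This is the layout
// torch::from_blob assumes for a 2-D shape.
std::size_t flat_index(std::size_t r, std::size_t c, std::size_t width) {
    return r * width + c;
}

// Build a feature buffer in that row-major order (values are arbitrary
// placeholders so the layout can be checked).
std::vector<float> make_feature_buffer(std::size_t height, std::size_t width) {
    std::vector<float> buf(height * width);
    for (std::size_t r = 0; r < height; ++r)
        for (std::size_t c = 0; c < width; ++c)
            buf[flat_index(r, c, width)] = static_cast<float>(r * 100 + c);
    return buf;
}
```

If the crash persists after switching APIs, it is worth verifying that float_buffer really holds output_height * output_width contiguous floats in this order.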

Hi,

I think you have exactly the same error as this one: MNIST with pytorch c++ api.

Could you try this instead of your auto f?

at::Tensor f = torch::from_blob(float_buffer, at::IntList(sizes), options);
f = f.toType(at::kFloat);
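One caveat worth knowing: torch::from_blob wraps the existing memory without copying it, so the original buffer must outlive the tensor, or the data must be copied (tensor.clone() in LibTorch). A minimal stand-in in plain C++ illustrating the two ownership modes:

```cpp
#include <cstddef>
#include <vector>

// A from_blob-style view: it borrows a pointer into someone else's
// buffer and does not own the data.
struct FloatView {
    const float* data;   // borrowed, not owned
    std::size_t  size;
};

// A clone-style copy: take owned storage so the result no longer
// depends on the source buffer staying alive or unchanged.
std::vector<float> clone_view(const FloatView& v) {
    return std::vector<float>(v.data, v.data + v.size);
}
```

So if float_buffer is a local that goes out of scope before forward() runs, cloning the tensor right after from_blob is the safe option.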

Also check out my GitHub repo; I did something very similar there, at line 31:

greetings


As per your suggestion, I added your changes to my code and now it is working.
Thanks a lot.