Hi,
I am learning deep learning using the PyTorch toolkit. I have trained a sample model with the PyTorch Python version. Now I am trying to test the trained model from C++ using example-app.cpp, which is provided in the PyTorch packages.
If I use the sequence of code below, it crashes at run time.
Code sequence:
auto f = CPU(kFloat).tensorFromBlob(float_buffer, {output_height, output_width});
std::vector<torch::jit::IValue> inputs;
inputs.push_back(f);
auto output = module->forward(inputs).toTensor();
Error message:
[ CPUFloatType{2,32} ]
terminate called after throwing an instance of ‘at::Error’
what(): Tensor that was converted to Variable was not actually a Variable (Variable at
But if I use the sequence of code below instead, it works well:
std::vector<torch::jit::IValue> inputs;
at::Tensor b1 = torch::randn({2, 32});
inputs.push_back(b1);
auto output = module->forward(inputs).toTensor();
I suspect the conversion of my C++ feature vector into a Tensor (auto f = CPU(kFloat).tensorFromBlob(float_buffer, {output_height, output_width});) is wrong, because it produces a tensor of type [ CPUFloatType{2,32} ], whereas at::Tensor b1 = torch::randn({2, 32}) produces a tensor of type
[ Variable[CPUFloatType]{2,32} ]
Is this way of converting a feature vector into a Tensor in C++ correct?
auto f = CPU(kFloat).tensorFromBlob(float_buffer, {output_height,output_width});
Please help me resolve this issue.
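For context, here is a minimal sketch of what I think the fix might look like. My assumption (please correct me if wrong) is that torch::from_blob should be used instead of the old CPU(kFloat).tensorFromBlob, since from_blob returns a Variable-backed tensor that module->forward() accepts. The model path "model.pt" and the buffer contents are placeholders; shapes match my code above:

```cpp
#include <torch/script.h> // LibTorch TorchScript header

#include <vector>

int main() {
  // Load the TorchScript module exported from Python.
  auto module = torch::jit::load("model.pt");

  // Raw feature buffer (2 x 32 floats, matching my shapes above).
  std::vector<float> float_buffer(2 * 32, 0.0f);
  int64_t output_height = 2, output_width = 32;

  // torch::from_blob wraps the existing memory without copying. Unlike
  // CPU(kFloat).tensorFromBlob it yields a Variable[CPUFloatType] tensor.
  // clone() copies the data so the tensor owns its storage independently
  // of float_buffer's lifetime.
  torch::Tensor f =
      torch::from_blob(float_buffer.data(), {output_height, output_width},
                       torch::kFloat)
          .clone();

  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(f);
  auto output = module->forward(inputs).toTensor();
  return 0;
}
```

Is this the intended replacement, or is there another recommended way to wrap an existing float buffer?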