When running model-runner, I get the following error:

ProtobufLoader.cpp line: 33 message: There is no tensor registered with name input_data

My ONNX graph looks like this:
graph(%input_data : Float(1, 2)
%1 : Float(8, 2)
%2 : Float(8)
%3 : Float(8, 8)
%4 : Float(8)
%5 : Float(8, 8)
%6 : Float(8)
%7 : Float(1, 8)
%8 : Float(1)) {
%9 : Float(1, 8) = onnx::Gemm[alpha=1, beta=1, transB=1](%input_data, %1, %2), scope: FeedForwardNN/Linear
%10 : Float(1, 8) = onnx::Relu(%9), scope: FeedForwardNN/ReLU
%11 : Float(1, 8) = onnx::Gemm[alpha=1, beta=1, transB=1](%10, %3, %4), scope: FeedForwardNN/ReLU
%12 : Float(1, 8) = onnx::Relu(%11), scope: FeedForwardNN/ReLU
%13 : Float(1, 8) = onnx::Gemm[alpha=1, beta=1, transB=1](%12, %5, %6), scope: FeedForwardNN/ReLU
%14 : Float(1, 8) = onnx::Relu(%13), scope: FeedForwardNN/ReLU
%15 : Float(1, 1) = onnx::Gemm[alpha=1, beta=1, transB=1](%14, %7, %8), scope: FeedForwardNN/ReLU
%output : Float(1, 1) = onnx::Sigmoid(%15), scope: FeedForwardNN/Sigmoid
return (%output);
}
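For reference, the computation this graph encodes can be sketched in NumPy (the weights below are random placeholders, not the trained parameters; Gemm with alpha=1, beta=1, transB=1 computes x @ W.T + b):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters matching the shapes in the graph dump.
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)   # %1, %2
W2, b2 = rng.standard_normal((8, 8)), rng.standard_normal(8)   # %3, %4
W3, b3 = rng.standard_normal((8, 8)), rng.standard_normal(8)   # %5, %6
W4, b4 = rng.standard_normal((1, 8)), rng.standard_normal(1)   # %7, %8

def relu(x):
    return np.maximum(x, 0.0)

def forward(input_data):
    # onnx::Gemm[alpha=1, beta=1, transB=1] is input_data @ W.T + b
    h = relu(input_data @ W1.T + b1)
    h = relu(h @ W2.T + b2)
    h = relu(h @ W3.T + b3)
    out = h @ W4.T + b4
    return 1.0 / (1.0 + np.exp(-out))  # onnx::Sigmoid

y = forward(np.zeros((1, 2), dtype=np.float32))
print(y.shape)  # (1, 1)
```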
The command line looks like this:
/bin $ ./model-runner -model=./ffnn.onnx -network-name="ffnn" -cpu -emit-bundle=./ -verbose
I have confirmed that the ONNX graph is constructed correctly by importing it in Python and running inference through the caffe2 backend.
I have also tried creating the init_net.pb and predict_net.pb files, but I get the same error when specifying *.pb instead of *.onnx.
I am guessing this has something to do with not specifying the input for model-runner (the way we do, for instance, with image-classifier). But then again, there seems to be no way to specify an input for model-runner.
Any ideas would be appreciated.
Thanks!