I am guessing this has something to do with not specifying the input for `model-runner` (the way we do, for instance, with `image-classifier`). But then again, there is no way to specify an input for `model-runner`.
Yeah, sorry for the confusion – you've diagnosed the problem correctly. As I noted in my reply in the other thread, the `ModelRunner` is more of a toy: your model must have no inputs and exactly one output.
I would suggest creating your own model loader/runner customized for your model, perhaps initially based on `ModelRunner` since it's the simplest. It would have a single input named `input_data` and a single output named `output`. You can look at `ImageClassifier` to see how it creates its `Caffe2ModelLoader`/`ONNXModelLoader` with an `inputName` along with a Tensor `Type` (`inputImageType`), and then calls `updateInputPlaceholders()` to update the input Tensor before running.
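To make that concrete, here is a rough sketch of what such a custom runner might look like, loosely modeled on how `ImageClassifier` wires things up. This is not a drop-in program: it assumes Glow's headers and libraries are available, and the model filename (`model.onnx`), input name (`input_data`), and tensor shape are placeholders you would replace with your model's actual values.

```cpp
// Hypothetical custom runner sketch based on Glow's loader APIs.
// Assumed: an ONNX model named "model.onnx" with one input "input_data"
// and a single output. Adjust the element kind and shape to your model.
#include "glow/ExecutionEngine/ExecutionEngine.h"
#include "glow/Graph/Graph.h"
#include "glow/Importer/ONNXModelLoader.h"
#include "glow/Support/Error.h"

using namespace glow;

int main() {
  ExecutionEngine EE;
  Module &mod = EE.getModule();
  Function *F = mod.createFunction("main");

  // Describe the single input: the name and tensor Type must match
  // what the model expects (shape here is an assumption).
  Tensor inputData(ElemKind::FloatTy, {1, 3, 224, 224});
  const char *inputName = "input_data";

  // Load the model, binding our tensor Type to the named input,
  // the same way ImageClassifier constructs its loader.
  ONNXModelLoader loader("model.onnx", {inputName},
                         {&inputData.getType()}, *F);

  // Look up the input placeholder and the single output placeholder.
  Placeholder *inputPH = llvm::cast<Placeholder>(
      EXIT_ON_ERR(loader.getNodeValueByName(inputName)).getNode());
  Placeholder *outputPH = EXIT_ON_ERR(loader.getSingleOutput());

  PlaceholderBindings bindings;
  bindings.allocate(mod.getPlaceholders());

  // Copy our data into the input placeholder's backing tensor
  // before running, as ImageClassifier does.
  updateInputPlaceholders(bindings, {inputPH}, {&inputData});

  EE.compile(CompilationMode::Infer);
  EE.run(bindings);

  // The result lives in the output placeholder's tensor,
  // e.g. inspect it via result->getHandle<float>().
  Tensor *result = bindings.get(outputPH);
  (void)result;
  return 0;
}
```

The key difference from `ModelRunner` is the explicit input: you pass the input name and `Type` to the loader so it can bind your data to the model's input placeholder, rather than assuming the model has no inputs at all.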