Access to input / output names in a model (torch::jit::script::Module)

Hi there,

I am working on a wrapper layer on top of PyTorch, in which the following methods need to be implemented:

std::vector<std::string> GetInputNames() const;
std::vector<std::string> GetOutputNames() const;
void SetInput(std::string, at::Tensor);
at::Tensor GetOutput(std::string);

It seems straightforward to implement these in TensorFlow's context. For example:

  tensor_dict feed_dict = {
      {"input", data},
  };

  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(sess->Run(feed_dict, {"output", "dense/kernel:0", "dense/bias:0"},
                        {}, &outputs));

where I can save "input" as the input key, data as the input value, "output" as an output key, and so on.

I wonder: is there a built-in way to implement the counterpart in PyTorch?
I read through the post on converting a PyTorch TorchScript model to ONNX; that seems doable, but I am not sure it is the right direction.
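For context, the closest thing I have found is inspecting the `forward` method's graph. This is only a sketch: it assumes the debug names attached to the graph's input/output values are meaningful, which TorchScript does not guarantee (they can be auto-generated), and the first input is typically the module itself:

```cpp
#include <torch/script.h>

#include <string>
#include <vector>

// Sketch: collect the debug names of the forward graph's inputs.
// The first input is usually "self" (the module), so callers may
// want to skip it.
std::vector<std::string> GetInputNames(const torch::jit::script::Module& module) {
  std::vector<std::string> names;
  auto graph = module.get_method("forward").graph();
  for (const torch::jit::Value* input : graph->inputs()) {
    names.push_back(input->debugName());
  }
  return names;
}

// Sketch: collect the debug names of the forward graph's outputs.
std::vector<std::string> GetOutputNames(const torch::jit::script::Module& module) {
  std::vector<std::string> names;
  auto graph = module.get_method("forward").graph();
  for (const torch::jit::Value* output : graph->outputs()) {
    names.push_back(output->debugName());
  }
  return names;
}
```

But since these names come from the traced/scripted graph rather than from a user-declared signature (as in TensorFlow's `feed_dict` keys), I am not sure they are stable enough to build `SetInput`/`GetOutput` on.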

Thanks in advance!


Any updates here? Got the same problem recently.

I got the same issue…