Can PyTorch serving in C++ accept protos like TF Serving?

TF Serving uses prediction_service.proto for its gRPC request/response, and I know about "Loading a PyTorch Model in C++" from https://pytorch.org/tutorials/advanced/cpp_export.html, like this:

#include <torch/script.h>

// Deserialize the TorchScript module exported from Python.
std::shared_ptr<torch::jit::script::Module> module =
    torch::jit::load("model.pt");

// Create a vector of inputs.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));

// Execute the model and turn its output into a tensor.
at::Tensor output = module->forward(inputs).toTensor();

std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';

The code creates a vector of torch::jit::IValue (a type-erased value type that script::Module methods accept and return) and adds a single input.
Is it possible for torch::jit::IValue to behave like PredictRequest or PredictResponse, i.e. support SerializeToString / ParseFromString? That way I could deploy the model more conveniently. (See the sketch below for the kind of round-trip I mean.)
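
To make the ask concrete, here is the round-trip I have in mind, shown with TF Serving's generated PredictRequest (this assumes the classes generated from tensorflow_serving/apis/predict.proto; the include path may vary with your build setup, and "my_model" is a placeholder name):

#include <string>
#include "tensorflow_serving/apis/predict.pb.h"

// Client side: fill a PredictRequest and serialize it to wire bytes.
tensorflow::serving::PredictRequest request;
request.mutable_model_spec()->set_name("my_model");
std::string wire_bytes;
request.SerializeToString(&wire_bytes);

// Server side: parse the bytes back into a message.
tensorflow::serving::PredictRequest parsed;
parsed.ParseFromString(wire_bytes);

I am hoping for something equivalent where the payload is a torch::jit::IValue.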

Can libtorch in C++ accept a protobuf message (or any other protocol that can be serialized and deserialized) as the input and output of module->forward, or does torch::jit::IValue provide Serialize and Deserialize methods?


Please don't tag specific people, as this might discourage others from providing an answer.

Hi @jun_yu, could you elaborate a bit more on what you're trying to do? It seems like you're trying to deploy a PyTorch model behind a TF Serving-style gRPC interface by serializing jitted values? I don't have much experience with this workflow, but perhaps you'll find an answer in the TorchServe gRPC docs: https://github.com/pytorch/serve/blob/master/docs/grpc_api.md
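
That said, libtorch does appear to expose torch::pickle_save and torch::pickle_load in its C++ API (declared in torch/serialize.h), which convert an IValue to and from a raw byte buffer. A minimal sketch, assuming a reasonably recent libtorch version that ships these functions:

#include <vector>
#include <torch/torch.h>

// Serialize an IValue to a byte buffer; the bytes could be carried in a
// protobuf `bytes` field or any other transport of your choosing.
torch::jit::IValue input = torch::ones({1, 3, 224, 224});
std::vector<char> buffer = torch::pickle_save(input);

// On the receiving side, reconstruct the IValue and use it as usual.
torch::jit::IValue restored = torch::pickle_load(buffer);
at::Tensor tensor = restored.toTensor();

You could then wrap that buffer in a small .proto message of your own, which gets you SerializeToString / ParseFromString on the envelope.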