TF Serving uses prediction_service.proto for its gRPC request/response. I know how to do "Loading a PyTorch Model in C++" from this tutorial: https://pytorch.org/tutorials/advanced/cpp_export.html, like this:
// Create a vector of inputs.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));

// Execute the model and turn its output into a tensor.
at::Tensor output = module->forward(inputs).toTensor();
std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
That is, the tutorial creates a vector of torch::jit::IValue (a type-erased value type that script::Module methods accept and return) and adds a single input.
Is it possible for torch::jit::IValue to work like PredictRequest or PredictResponse, which support SerializeToString and ParseFromString? That would let me deploy the model more conveniently.
Can the libtorch C++ API provide a protobuf-based protocol (or any other protocol that can be serialized and deserialized) for module->forward's input and output parameters? Or does torch::jit::IValue itself provide Serialize and Deserialize methods?