I’m using a TorchScript model to make decisions in my C++ code. To try to make it more efficient, I made a model class, which can load the model weights and be called to make predictions. This allows me to load the model a single time instead of every time I want to call forward. Since I plan on calling the model many times, this can lead to significant savings.
The model successfully calls forward twice, but on the third call, I receive the following error:
Unable to find target for this triple (no targets are registered)
[1]    38685 IOT instruction (core dumped)
I’m not sure what is causing this, since it is able to call forward twice. If I create the model every time I want to call forward, it works fine. Does anyone have any insight? If it helps, here is my model class. Note that the model is a PyTorch Geometric model, though I don’t think that makes a difference.
gat::gat(std::string path) {
    try {
        // Load the TorchScript module once; it is reused for every predict() call
        model = torch::jit::load(path);
        loaded = true;
    }
    catch (const c10::Error &e) {
        loaded = false;
    }
}
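For completeness, this is roughly the class declaration these definitions assume. The member types are inferred from how they are used in the code, so treat them as assumptions (in particular, `edge_attr`'s element type is a guess):

```cpp
#pragma once
#include <torch/script.h>
#include <string>
#include <vector>

// Sketch of the declaration matching the definitions shown here;
// member types are inferred from usage, not confirmed.
class gat {
public:
    explicit gat(std::string path);
    int predict(std::vector<unsigned int> nodes,
                std::vector<unsigned int> inEdges,
                std::vector<unsigned int> outEdges);

private:
    torch::jit::script::Module model;     // loaded once in the constructor
    bool loaded = false;                  // whether torch::jit::load succeeded
    std::vector<unsigned int> edge_attr;  // referenced in predict(); assumed member
};
```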
int gat::predict(std::vector<unsigned int> nodes,
                 std::vector<unsigned int> inEdges,
                 std::vector<unsigned int> outEdges) {
    // Need to declare the blobs as int type to avoid conversion issues
    auto opts = torch::TensorOptions().dtype(torch::kInt32);
    // Need to convert the node features to float for the model; each node has 67 features
    auto nodeTensor = torch::from_blob(nodes.data(), {(int64_t)nodes.size() / 67, 67}, opts).to(torch::kFloat32);
    auto inEdgeTensor = torch::from_blob(inEdges.data(), {(int64_t)inEdges.size()}, opts).to(torch::kInt64);
    auto outEdgeTensor = torch::from_blob(outEdges.data(), {(int64_t)outEdges.size()}, opts).to(torch::kInt64);
    auto edgeTensor = torch::stack({inEdgeTensor, outEdgeTensor});
    // edge_attr is a class member
    auto edge_attrTensor = torch::from_blob(edge_attr.data(), {(int64_t)edge_attr.size()}, opts).to(torch::kFloat32);
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(nodeTensor);
    inputs.push_back(edgeTensor);
    auto out = model.forward(inputs).toTensor();
    int choice = out.argmin().item<int>();
    return choice;
}
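For reference, the calling pattern that hits the crash looks roughly like this. The header name, model path, and input sizes are placeholders, not my real values:

```cpp
#include "gat.h"  // hypothetical header declaring the class above
#include <vector>

int main() {
    // Load the TorchScript module once, up front
    gat model("model.pt");  // placeholder path

    // One node with 67 features and a single self-loop edge, as filler input
    std::vector<unsigned int> nodes(67, 0);
    std::vector<unsigned int> inEdges{0}, outEdges{0};

    // Repeated calls reuse the already-loaded module; the abort described
    // above shows up on the third call to predict()
    for (int i = 0; i < 3; ++i) {
        model.predict(nodes, inEdges, outEdges);
    }
    return 0;
}
```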