Hi,
I am working on a project where I need to export my PyTorch model to LibTorch (to build an application around it). I traced the model according to the PyTorch documentation and want to load it in C++. I have the following code:
#include <torch/script.h>

torch::Tensor loadExample(const char *path)
{
    // a grayscale picture, 280x280
    int picture[280][280];
    // load data from file; loadData is a custom method that fills the buffer
    loadData(path, picture);
    // array to tensor
    torch::Tensor dataTensor = torch::from_blob(picture, {280, 280}, at::kByte);
    // add a batch dim so its size is 1 x 280 x 280
    dataTensor = dataTensor.to(at::kFloat).unsqueeze(0);
    return dataTensor;
}
int main(int argc, const char *argv[])
{
    // argv[1] == <path-to-exported-script-module>
    // argv[2] == <path-to-data>
    torch::jit::script::Module model = torch::jit::load(argv[1]);
    // get the input
    std::vector<torch::jit::IValue> input;
    input.push_back(loadExample(argv[2]));
    at::Tensor output = model.forward(input).toTensor();
    std::cout << output << std::endl;
    return 0;
}
It loads a serialized picture as sample input. Loading the image itself works fine, but no matter what the input is, the model always returns the same output. Only changing the weights changes the output, which is completely different from the output of the traced PyTorch model. Any ideas what the error is? I have been trying to figure out what’s wrong for hours…