Model always gives the same output with different inputs

Hi,
I am working on a project where I need to export my PyTorch model to LibTorch (to create an application). I traced the model according to the PyTorch documentation and want to load it in C++. I have the following code:

#include <torch/script.h>
#include <iostream>
#include <vector>

torch::Tensor loadExample(const char *path)
{
  // a grayscale picture, 280x280
  int picture[280][280];
  // load the data from file; loadData is a custom method that fills the array
  loadData(path, picture);

  // array to tensor
  torch::Tensor dataTensor = torch::from_blob(picture, {280, 280}, at::kByte);
  // add a batch dim so its size is 1x280x280
  dataTensor = dataTensor.to(at::kFloat).unsqueeze(0);

  return dataTensor;
}

int main(int argc, const char *argv[])
{
  // argv[1] == <path-to-exported-script-module>
  // argv[2] == <path-to-data>

  torch::jit::script::Module model = torch::jit::load(argv[1]);

  // get the input
  std::vector<torch::jit::IValue> input;
  input.push_back(loadExample(argv[2]));

  at::Tensor output = model.forward(input).toTensor();

  std::cout << output << std::endl;
  return 0;
}

It loads a serialized picture as sample input. Loading the image itself definitely works, but no matter what the input is, the model always returns the same output. Only changing the weights changes the output, and that output is completely different from the output of the traced PyTorch model in Python. Any ideas what the error is? I have been trying to figure out what’s wrong for hours…

I would first run some sanity checks to make sure the tensor the model actually sees changes between evaluations (e.g., by printing something like tensor.sum() over all the values). If it does change and the model is a pretrained one, I would check that the corresponding preprocessing transformations are applied, such as any normalization.
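For example, a minimal check in LibTorch could look like the sketch below. The normalization constants are purely illustrative assumptions (MNIST-style mean/std); the actual values have to match whatever your model was trained with:

// sanity check: a summary statistic of the input tensor should
// differ between two different input images
torch::Tensor example = loadExample(argv[2]);
std::cout << "input sum: " << example.sum().item<float>() << std::endl;

// illustrative normalization matching a hypothetical training pipeline
// (0.1307 / 0.3081 are the MNIST mean/std, assumed here only as an example)
example = example.div(255.0).sub(0.1307).div(0.3081);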

Thanks for your advice, it was a problem with torch::from_blob: the dtype I passed (at::kByte) did not match my int array, so the tensor was built from reinterpreted raw bytes. It finally worked when I inserted the values via for-loops.
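For anyone running into the same issue, the working version looked roughly like this — a minimal sketch, assuming loadData fills the int array in place:

torch::Tensor loadExample(const char *path)
{
  int picture[280][280];
  loadData(path, picture);

  // copy the values element by element instead of reinterpreting raw memory
  torch::Tensor dataTensor = torch::zeros({280, 280}, at::kFloat);
  auto acc = dataTensor.accessor<float, 2>();
  for (int i = 0; i < 280; i++) {
    for (int j = 0; j < 280; j++) {
      acc[i][j] = static_cast<float>(picture[i][j]);
    }
  }

  // add the batch dim, as before
  return dataTensor.unsqueeze(0);
}

An alternative would be to keep torch::from_blob but pass a dtype that matches the buffer, e.g. torch::from_blob(picture, {280, 280}, at::kInt).to(at::kFloat) for a 32-bit int array; the .to() call copies the data into a new float tensor that no longer aliases the stack array.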