Different inference results when running on arm64 and x86_64

Sure, here is what I'm doing:

  • load the model:
    module = torch::jit::load(fileName);

  • process:
    torch::Tensor result;
    std::vector<torch::jit::IValue> inputs;
    at::Tensor tensor = torch::from_blob(data.data(), {1, 1, 11, 144});
    inputs.push_back(tensor);
    torch::autograd::AutoGradMode guard(false);
    result = module.forward(inputs).toTensor();

I have written unit tests and integration tests that show the results are equal to the Python ones, but once I run this on the iPhone the results are different.