Inference result is different between x86 and android libpytorch

Suppose I have a pretrained model weight. If I export the model with the following steps, I find that the inference result differs between x86 and Android libpytorch.

...
model = Conv1d()
w = torch.load("weight.pt")
model.conv.weight.data = w
jit_model = torch.jit.script(model)
torch.jit.save(jit_model, "conv_2.pt")
...

Then I load and run inference with conv_2.pt on x86 and Android separately, and the two results are different.
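For reference, a self-contained version of the export steps above. The `Conv1d` wrapper class and `weight.pt` are not shown in the report, so the module definition, channel counts, and the random stand-in for the loaded weight below are all assumptions (chosen so the output matches the reported 2x10 shape):

```python
import torch
import torch.nn as nn

class Conv1d(nn.Module):
    """Assumed minimal wrapper around a single conv layer."""
    def __init__(self, in_channels=1, out_channels=2, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, bias=False)

    def forward(self, x):
        return self.conv(x)

model = Conv1d()
w = torch.randn(2, 1, 3)  # stand-in for torch.load("weight.pt")
model.conv.weight.data = w

jit_model = torch.jit.script(model)
torch.jit.save(jit_model, "conv_2.pt")

# Reload and run once to sanity-check the export on the desktop side.
reloaded = torch.jit.load("conv_2.pt")
x = torch.randn(1, 1, 12)  # length 12 -> output length 10 with kernel size 3
print(reloaded(x).shape)  # torch.Size([1, 2, 10])
```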

On x86, the PyTorch output is:

tensor([[ 0.1662,  0.0207,  0.1688,  0.1911,  0.1231, -0.1205,  0.1114,  0.1749,
          0.2523,  0.0050],
        [-0.0863, -0.0477, -0.0551, -0.0679, -0.0117, -0.0239, -0.0428,  0.0200,
         -0.0516, -0.0071]])

while on Android, the libpytorch output is:

(1,.,.) =
 Columns 1 to 9  0.0366 -0.0032  0.0441  0.0488  0.0381 -0.0448  0.0277  0.0619  0.0724
  0.1012  0.0999  0.0257  0.0421 -0.0487  0.1312  0.0322 -0.1556 -0.0342

Columns 10 to 10  0.0001
  0.0137
[ CPUFloatType{1,2,10} ]

I’m sure that the inputs are the same.

But if I change the code into:

model = Conv1d()
w = torch.load("weight.pt")
new_w = torch.flatten(torch.normal(0, 1, w.shape))
org_w = torch.flatten(w)
for idx, a in enumerate(org_w):
    new_w[idx] = a
new_w = new_w.reshape(w.shape)
model.conv.weight.data = new_w
jit_model = torch.jit.script(model)
torch.jit.save(jit_model, "conv_2.pt")

Then I load and run inference with conv_2.pt on x86 and Android separately, and the two results are the same.
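Note that the flatten/copy loop in that snippet just produces an element-wise copy of `w` in a freshly allocated buffer, which may be why the behavior changes. A sketch checking that equivalence (the random `w` here is a stand-in for the loaded weight, which I don't have):

```python
import torch

w = torch.randn(2, 1, 3)  # stand-in for torch.load("weight.pt")

# Element-by-element copy, as in the snippet above.
new_w = torch.flatten(torch.normal(0, 1, w.shape))
org_w = torch.flatten(w)
for idx, a in enumerate(org_w):
    new_w[idx] = a
new_w = new_w.reshape(w.shape)

# The loop is equivalent to taking a fresh, contiguous copy of w.
assert torch.equal(new_w, w.clone().contiguous())
```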

Code to reproduce the bug: https://www.dropbox.com/s/vldy1veaqgihk2j/pytorch_bug_report.zip?dl=0

new_w doesn’t seem to be used in the second code snippet, so the difference seems to come from the .data assignment.
Could you remove the .data usage, assign the weight via:

with torch.no_grad():
    model.conv.weight = nn.Parameter(w)

and rerun the test?
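The suggested assignment can be checked in isolation like this (a minimal sketch; a bare `nn.Conv1d` with assumed dimensions stands in for the reporter's model):

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(1, 2, 3, bias=False)
w = torch.randn(2, 1, 3)  # stand-in for the loaded weight

# Replace the parameter wholesale instead of mutating .data in place.
with torch.no_grad():
    conv.weight = nn.Parameter(w)

assert torch.equal(conv.weight, w)
assert conv.weight.requires_grad  # still a trainable leaf parameter
```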

That was a mistake; I have re-edited the code snippet.

I also tried this assignment method, but the results are still different.

I have the same problem: I have a scripted model that returns a tensor. On Android all output elements are always 0, but in Python, with the same input image, the output is non-zero.

I don’t know what I should do or try…

On Android:

final Tensor output = module.forward(IValue.from(in_prep)).toTensor();
final float[] data = output.getDataAsFloatArray();
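One way to narrow this down is to load the exact TorchScript file bundled with the app in Python, feed it a fixed input, and compare the dumped values against `getDataAsFloatArray()` on Android. A sketch (the tiny scripted model and the input shape are stand-ins; in practice, load the real file and real preprocessed input):

```python
import torch
import torch.nn as nn

# Stand-in for the real exported model; replace with the file shipped in the app.
net = torch.jit.script(nn.Sequential(nn.Conv2d(3, 2, 3), nn.ReLU()))
torch.jit.save(net, "model.pt")

module = torch.jit.load("model.pt")
module.eval()

# A fixed, reproducible input so both platforms see identical bytes.
x = torch.ones(1, 3, 8, 8)  # assumed input shape
with torch.no_grad():
    out = module(x)

# Compare these values with the float array read back on Android.
print(out.flatten()[:10].tolist())
```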

Did you solve this problem?
I think I’m encountering almost the same situation.