Here’s my model code, which is basically the generative model from the fast neural style PyTorch example.
I converted it to TorchScript:
net = TransformerNet(alpha=0.3)
net.load_state_dict(torch.load("trained_models/starry_night_small.pth",
                               map_location='cpu'))
net.eval()

# Convert to TorchScript. Note: torch.jit.script() takes only the module;
# it does not accept an example input (that is trace()'s signature).
ts_module = torch.jit.script(net)  # I also tried torch.jit.trace() but no luck
ts_module.save("trained_models/starry_night.zip")
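For comparison, a minimal sketch of the trace-based conversion path (which does need an example input), using a stand-in Conv2d since TransformerNet isn't defined in this snippet:

```python
import torch

# Stand-in module for TransformerNet, just to illustrate the API shape.
net = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).eval()

example = torch.ones(1, 3, 256, 256)
# trace() records the ops executed on the example input.
traced = torch.jit.trace(net, example)

# For a deterministic eval-mode module, traced output should match eager output.
same = torch.allclose(net(example), traced(example))
print(same)
```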
I tested right in Python to check whether eager mode and the scripted module give the same results, and they match exactly (I expected at least a small numerical error from the conversion):
example_input = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    orig_out = net(example_input)
    ts_out = ts_module(example_input)
diff = F.l1_loss(orig_out.flatten(), ts_out.flatten()).item()  # F = torch.nn.functional
print("Error: ", diff)
# output: 0.0
But when I brought the model into an Android app, it gave very different results.
Here’s the Android code:
float[] dummyInput = new float[3 * 5 * 5];
long[] shape = new long[]{1, 3, 5, 5};
Arrays.fill(dummyInput, 1.0f);
final Tensor inputTensor = Tensor.fromBlob(dummyInput, shape);
Tensor outputTensor = net.forward(IValue.from(inputTensor)).toTensor();
float[] rawPixels = outputTensor.getDataAsFloatArray();
// rawPixels = {0.38102132, 0.15379308, 0.21999769, 0.33937144, 0.24030314, 0.15130955, 0.3354013, 0.19232287,...}
while the results in Python for torch.ones() are just:
ts_module(torch.ones(1,3,5,5)).flatten().unique()
# output: tensor([0.2949, 0.3554, 0.4184], grad_fn=<NotImplemented>)
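One debugging step that might help isolate the problem: reload the saved TorchScript file in Python and compare it against the in-memory module, to rule out serialization before blaming the mobile runtime. A self-contained sketch, with a toy Conv2d standing in for TransformerNet and a hypothetical filename:

```python
import torch

# Stand-in for TransformerNet; the round-trip logic is the same.
net = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).eval()
scripted = torch.jit.script(net)

# Save and reload the TorchScript module ("roundtrip_check.pt" is a
# placeholder path).
torch.jit.save(scripted, "roundtrip_check.pt")
reloaded = torch.jit.load("roundtrip_check.pt")

with torch.no_grad():
    x = torch.ones(1, 3, 5, 5)  # same shape as the Android dummy input
    diff = (scripted(x) - reloaded(x)).abs().max().item()
print(diff)  # expect 0.0 if save/load is faithful
```

If this prints 0.0 for the real model too, the divergence is introduced on the Android side (input layout, normalization, or the mobile runtime) rather than by serialization.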
Sorry for the lengthy post, but I would really appreciate it if someone could point me in the right direction for debugging this.
Thank you in advance!