When I convert my torch model with the mobile optimizer, it gives different outputs, even though the original torch model and the traced JIT model agree exactly.
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile
# model definition with two classes
model = torchvision.models.mobilenet_v3_large()
model.classifier[3] = torch.nn.Linear(in_features=1280, out_features=2)
model.load_state_dict(torch.load('model.ptl'))
model.eval()
# Optimize model for mobile
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
traced_script_module_optimized = optimize_for_mobile(traced_script_module)
- INFERENCE TORCH MODEL
model.forward(torch.ones([1, 3, 224, 224], dtype=torch.float))
# Output
tensor([[-3.0561, 3.0894]], grad_fn=<AddmmBackward0>)
- INFERENCE JIT MODEL
traced_script_module.forward(torch.ones([1, 3, 224, 224], dtype=torch.float))
# Output
tensor([[-3.0561, 3.0894]], grad_fn=<AddmmBackward0>)
- INFERENCE OPTIMIZED MODEL
traced_script_module_optimized.forward(torch.ones([1, 3, 224, 224], dtype=torch.float))
# Output
tensor([[-4.5466, 6.5033]])
I tested with several different inputs, always with the same result: after optimizing for mobile, the model's output is different.
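For reference, this is roughly how I compare the three variants on random inputs. `TinyNet` below is a hypothetical stand-in so the snippet is self-contained (my actual model is `mobilenet_v3_large` loaded from a checkpoint), and the `atol` tolerances are arbitrary choices:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Hypothetical stand-in for the real model: conv + batchnorm + linear head,
# small enough to run anywhere but still exercising conv-bn fusion.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 4, kernel_size=3)
        self.bn = torch.nn.BatchNorm2d(4)
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        x = self.bn(self.conv(x)).mean(dim=(2, 3))  # global average pool
        return self.fc(x)

model = TinyNet().eval()
example = torch.rand(1, 3, 8, 8)
traced = torch.jit.trace(model, example)
optimized = optimize_for_mobile(traced)

with torch.no_grad():
    x = torch.rand(1, 3, 8, 8)
    out_eager = model(x)
    out_traced = traced(x)
    out_opt = optimized(x)
    # Eager and traced always match for me; the eager-vs-optimized
    # comparison is the one that fails with my mobilenet_v3_large model.
    print(torch.allclose(out_eager, out_traced, atol=1e-5))
    print(torch.allclose(out_eager, out_opt, atol=1e-4))
```

With this toy model the comparison passes for me; with my actual mobilenet_v3_large checkpoint, the optimized outputs diverge as shown above.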