optimize_for_mobile exports are double the size of plain TorchScript

I tried exporting a model for PyTorch Lite to use it on mobile.

Before, I just used

script = torch.jit.trace(model, dummy_input)
script.save("model.pt")

which resulted in a 46 MB file.

But with

from torch.utils.mobile_optimizer import optimize_for_mobile

script = torch.jit.trace(model, dummy_input)
script_opt = optimize_for_mobile(script)
script_opt._save_for_lite_interpreter("model.ptl")

I get a resulting size of 92 MB.

I tracked the problem down to optimize_for_mobile.
Why does it make the export so much bigger?
Does it only optimize for speed, not for size?

It looks like the constants get duplicated. cc @cccclai, could you take a look?
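
For anyone who wants to check this themselves: both the .pt and .ptl exports are ZIP archives, so you can list their largest entries and see where the extra megabytes live. A minimal sketch, assuming the two files from the posts above are on disk:

import zipfile

# TorchScript (.pt) and lite interpreter (.ptl) exports are ZIP archives.
# Listing the biggest entries shows which stored tensors account for the
# size difference between the two files.
for path in ("model.pt", "model.ptl"):
    with zipfile.ZipFile(path) as zf:
        entries = sorted(zf.infolist(), key=lambda i: i.file_size, reverse=True)
        total = sum(i.file_size for i in entries)
        print(f"{path}: {total / 1e6:.1f} MB uncompressed")
        for info in entries[:10]:
            print(f"  {info.file_size / 1e6:8.2f} MB  {info.filename}")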

Any update on this? It would be great to get the intended optimizations.

@Martin_Yuan @cccclai Can you follow up on this?

This should have been resolved by @cccclai in “[After fix] Reuse constant and bump bytecode to v5” (Pull Request #59722 · pytorch/pytorch).


I can confirm that with the current nightly build the export no longer doubles in size.

But I'm a bit puzzled, because the release notes state “Major improvements in on-device binary size with Mobile Interpreter”, yet neither the model exports nor the library binaries on Android have gotten smaller; the Android binaries are still around 30-40 MB (I haven't checked iOS yet). What exactly do you mean by binary size?

Do you have any concrete numbers that show the improvement?

@cccclai Could you elaborate on this?

Hi Erik, the binary size optimization is for the PyTorch library (runtime), not the model file. You need to use a custom build to include only the ops that your model needs. Please check this tutorial:

https://pytorch.org/tutorials/recipes/mobile_interpreter.html#how-to-use-mobile-interpreter-custom-build
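
For reference, the custom build consumes a YAML list of the ops your model actually uses; a minimal sketch of generating it, roughly following that tutorial (the file name model_ops.yaml is just an example):

import torch
import yaml

# Export the root op names the scripted model calls; the custom-build
# scripts read this YAML to drop all other ops from the runtime.
scripted = torch.jit.script(model)
ops = torch.jit.export_opnames(scripted)
with open("model_ops.yaml", "w") as f:
    yaml.dump(ops, f)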


Alright! Thanks for the response!


I found out that the problem lies in the “Conv packed params hoisting” (HOIST_CONV_PACKED_PARAMS) and INSERT_FOLD_PREPACK_OPS optimization steps. Presumably those passes prepack the conv weights into additional constants while the originals are retained, which would explain the doubling.
The following code optimizes the model for mobile without doubling its size by blocklisting those two steps. This works without the nightly build:

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torch._C import MobileOptimizerType

# `model` is the trained nn.Module; `use_lite` selects the lite-interpreter format.
torchscript_model = torch.jit.script(model)
torchscript_model = optimize_for_mobile(
    torchscript_model,
    # Skip the two passes that duplicate the conv weight constants.
    optimization_blocklist={
        MobileOptimizerType.HOIST_CONV_PACKED_PARAMS,
        MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
    },
)
if use_lite:
    torchscript_model._save_for_lite_interpreter("model.ptl")
else:
    torch.jit.save(torchscript_model, "model.pt")
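
To double-check that the workaround keeps the size down, you can compare whatever was written to disk; a quick sketch, assuming the paths from above:

import os

# Print the on-disk size of whichever export exists; with the blocklist
# above it should match the plain TorchScript export (~46 MB here).
for path in ("model.pt", "model.ptl"):
    if os.path.exists(path):
        print(f"{path}: {os.path.getsize(path) / 1e6:.1f} MB")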

Is this still a problem?