How to custom-build a TorchScript model for use in other code

I want to compile my model so it can be executed by a Python script running on our customers' computers.
Currently my ResNet model is ~100 MB, but it depends on torch, which requires ~1.5 GB of space.
Currently I am running this series of commands:

import torch
import torchvision
import yaml

model = torchvision.models.resnet50(pretrained=True)
model.eval()

# Trace the model with a dummy input to get a TorchScript module
example = torch.ones(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example)
traced_model.save('traced_model.pt')

# Dump the list of operators the traced (not eager) model uses
ops = torch.jit.export_opnames(traced_model)
with open('model_ops.yaml', 'w') as output:
    yaml.dump(ops, output)

The question is how to continue from here in order to build a model I can use in another Python/C script without loading the entire torch or libtorch packages, but only what is needed based on the model's operations.
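
For context, the customer-side script is currently along these lines (a minimal sketch; the file name matches the save above), and it still pulls in the full torch install:

import torch

# Loading the traced model today still requires the complete torch package (~1.5 GB)
loaded = torch.jit.load('traced_model.pt')
with torch.no_grad():
    output = loaded(torch.ones(1, 3, 224, 224))
print(output.shape)  # torch.Size([1, 1000]) for ResNet-50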

We currently don’t support this. This sounds like a reasonable feature request that is important for deployment. Please open a feature request over at https://github.com/pytorch/pytorch (or voice your support on a current issue if one exists)!

This sounds like it overlaps with some of the concerns of the Mobile user base (e.g. building with only the used ops to reduce total size). Cc @David_Reiss

@Jeff_Smith - I agree. This is relevant for mobile as well as for other end devices. However, considering the resources available in modern smartphones, the need on other (non-mobile) end devices is much more prominent. This is a major issue for deployment.

@richard - I saw some discussions about this for mobile deployment (https://pytorch.org/mobile/ios/#custom-build). However, as I answered @Jeff_Smith, the need for such a solution for deployment on any end device is even greater than for modern smartphones.
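
If I read those docs correctly, the custom build consumes the same op list generated above (passed via SELECTED_OP_LIST to the mobile build scripts), so only the listed operators get compiled in. As a sketch, one could then verify that a model's ops are covered by such a list (file names match my snippet above; the coverage check itself is my own illustration, not part of the docs):

import torch
import yaml

# Ops the custom build was (hypothetically) compiled with
with open('model_ops.yaml') as f:
    built_ops = set(yaml.safe_load(f))

# Ops the traced model actually needs
traced_model = torch.jit.load('traced_model.pt')
model_ops = set(torch.jit.export_opnames(traced_model))

missing = model_ops - built_ops
if missing:
    print('Ops missing from the custom build:', sorted(missing))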

Moved discussion to https://github.com/pytorch/pytorch/issues/32690#