I want to compile my model so it can be executed by a Python script running on our customers' computers.
My ResNet model is only ~100 MB, but it depends on torch, which takes up about 1.5 GB of disk space.
Currently I am running this series of commands:
```python
import torch
import torchvision
import yaml

model = torchvision.models.resnet50(pretrained=True)
model.eval()

# Trace the model with a dummy input
example = torch.ones(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example)

# Export the operator names from the traced (script) module,
# not the eager model, and save both artifacts
ops = torch.jit.export_opnames(traced_model)
traced_model.save('traced_model.pt')
with open('model_ops.yaml', 'w') as output:
    yaml.dump(ops, output)
```
The question is how to continue from here: how do I build an artifact I can use from another Python or C++ program without shipping the entire torch/libtorch package, including only what is needed for the model's operators?
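For context, what I have found so far: PyTorch's documented "selective build" flow consumes exactly this kind of ops YAML via the `SELECTED_OP_LIST` variable, but it appears to target the mobile build scripts only, not a desktop libtorch. A rough sketch of that flow (paths and the `arm64-v8a` ABI choice are illustrative, and I have not confirmed this produces a desktop-usable library):

```shell
# Assumption: a recursive clone of the pytorch source tree.
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch

# Android: build a trimmed runtime containing only the ops in the YAML.
SELECTED_OP_LIST=/path/to/model_ops.yaml \
    ./scripts/build_pytorch_android.sh arm64-v8a

# iOS: the analogous script takes the same op-list variable.
SELECTED_OP_LIST=/path/to/model_ops.yaml \
    BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 ./scripts/build_ios.sh
```

Is there an equivalent way to get a trimmed-down runtime for a desktop Python/C++ deployment?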