Hi,
As usual, I create my model and load the saved weights:
torch_model = MyModel() #Create my model
state_dict = torch.load(model_weights_path)['state_dict']
torch_model.load_state_dict(state_dict)
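To make that concrete, here is a self-contained toy version of the same save/load pattern (`MyModel` and `model_weights_path` above are placeholders for my real model and checkpoint; the `{'state_dict': ...}` layout matches how my checkpoints are written):

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):  # stand-in for the real model
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 4)

    def forward(self, x):
        return self.fc(x)

# Write a checkpoint in the same {'state_dict': ...} layout
model = MyModel()
torch.save({'state_dict': model.state_dict()}, 'weights.pt')

# Later: first create the model, then load the weights into it
torch_model = MyModel()
state_dict = torch.load('weights.pt')['state_dict']
torch_model.load_state_dict(state_dict)
```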
I can successfully create a TorchScript model from this with torch.jit.script, which I then save, load, and use in my C++ app. The model and the weights are stored together in the single serialized file produced by save():
scripted_module = torch.jit.script(torch_model)  # script() takes the module only; no example input needed
scripted_module.save("torchscript_model.pt")
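For reference, a small runnable sketch of the two export paths (a generic nn.Sequential stands in for my model, and I use a smaller spatial size than my real 1x3x5x512x512 input just for speed): torch.jit.script compiles the module's Python source and needs no example input, whereas torch.jit.trace records the ops executed on a concrete example input.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv3d(3, 8, 3, padding=1), nn.ReLU())
x = torch.randn(1, 3, 5, 32, 32)  # smaller than my real input, for speed

# script() compiles the Python source; no example input is needed
scripted = torch.jit.script(model)

# trace() records the ops run on a concrete example input
traced = torch.jit.trace(model, x)

scripted.save("scripted_model.pt")
traced.save("traced_model.pt")
```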
The workflow in C++ is different because the model and the weights are stored together in torchscript_model.pt. Is there any way to save the model and the weights of the TorchScript model separately, and then use similar steps to the Python workflow, i.e. first create the model and then load the weights?
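To illustrate the workflow I am after, here is a sketch of what already works on the Python side (a ScriptModule is still an nn.Module, so it exposes state_dict()/load_state_dict(); the toy nn.Sequential and file names are just placeholders). My question is whether something equivalent is possible from C++:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2))
scripted = torch.jit.script(model)

# Save the TorchScript module and the weights separately
scripted.save("arch.pt")                          # serialized module
torch.save(scripted.state_dict(), "weights.pt")   # weights alone

# Later: load the module, then load the weights into it
restored = torch.jit.load("arch.pt")
restored.load_state_dict(torch.load("weights.pt"))
```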