Error when wrapping multiple scripted models into a new TorchScript model

I’m trying to wrap two scripted models (from torch.jit.script) into a new TorchScript model. I’ve narrowed the problem down to the cases below:

(Note: cls = classification model, mask = segmentation model)
1. res34(cls) + res34(cls)  => Pass (same architecture, same weights (same epoch))
2. res34(cls) + res34(cls)  => Pass (same architecture, different weights (different epoch))
3. res18(cls) + res34(cls)  => Fail (different architectures, both classification)
4. res34(cls) + res34(mask) => Fail (different architectures, different tasks)

Before getting into the snippet, note that the two failing cases (3 and 4) produce different error messages.
Looking forward to your answer. Thanks!!

(PyTorch Version: 1.4.0)

import torch
import torch.nn as nn

class TestNet(nn.Module):
    def __init__(self):
        super(TestNet, self).__init__()
        self.model1 = torch.jit.load(SAVE_MODEL_PATH)       # res34
        # Uncomment exactly one of the following lines per test case:
        #self.model2 = torch.jit.load(SAVE_MODEL_PATH)      # res34, same weights
        #self.model2 = torch.jit.load(SAVE_MODEL_PATH2)     # res18
        #self.model2 = torch.jit.load(SAVE_MODEL_PATH3)     # res34, different weights
        self.model2 = torch.jit.load(SAVE_MODEL_PATH4)      # mask

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # model2 is only loaded, never called; loading it is enough to trigger the error
        return self.model1(x)[0]

def test_ensemble():
    input640 = torch.rand(1, 3, 640, 640).cuda()
    test_model = TestNet()
    # script the wrapper, save it (this overwrites the file model1 was loaded from), reload it
    test_model_libtorch = torch.jit.script(test_model)
    test_model_libtorch.save(SAVE_MODEL_PATH)
    test_model_libtorch = torch.jit.load(SAVE_MODEL_PATH).cuda()
    output = test_model_libtorch(input640)
    exit(0)
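For comparison, here is a minimal, CPU-only sketch of the same wrapping pattern, using two tiny scripted modules with different output widths to stand in for res18/res34 (the Small and Wrapper classes and the temp-file paths are hypothetical, not part of the original repro). It saves the wrapper to a new path instead of overwriting the first model's file; on recent PyTorch releases this pattern appears to run cleanly, so the failures above may be specific to 1.4.0:

```python
import os
import tempfile

import torch
import torch.nn as nn

class Small(nn.Module):
    """Tiny stand-in for a backbone; out_ch varies the 'architecture'."""
    def __init__(self, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(3, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

tmp = tempfile.mkdtemp()
p1, p2, p3 = (os.path.join(tmp, n) for n in ("m1.pt", "m2.pt", "wrap.pt"))

# Script and save two models with different "architectures" (like res34 vs res18)
torch.jit.script(Small(4)).save(p1)
torch.jit.script(Small(8)).save(p2)

class Wrapper(nn.Module):
    """Loads two already-scripted models, like TestNet above."""
    def __init__(self):
        super().__init__()
        self.model1 = torch.jit.load(p1)
        self.model2 = torch.jit.load(p2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.model1(x)

wrapped = torch.jit.script(Wrapper())
wrapped.save(p3)  # save to a NEW path, not over p1
reloaded = torch.jit.load(p3)
out = reloaded(torch.rand(1, 3, 8, 8))
print(tuple(out.shape))
```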
res34 + res34                     => OK
res34 + res34 (different weights) => OK
res34 + res18                     => fails during forward:
RuntimeError: input.isTensor() INTERNAL ASSERT FAILED at /pytorch/torch/csrc/jit/argument_spec.h:89, please report a bug to PyTorch. Expected Tensor but found Bool (addTensor at /pytorch/torch/csrc/jit/argument_spec.h:89)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fa6372ff193 in /usr/local/lib/python3.7/dist-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x4059e33 (0x7fa63b9b3e33 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so)
frame #2: torch::jit::ArgumentSpecCreator::create(bool, std::vector<c10::IValue, std::allocator<c10::IValue> > const&) const + 0x230 (0x7fa63b9ae8d0 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x40848fb (0x7fa63b9de8fb in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so)
frame #4: <unknown function> + 0x407b8c1 (0x7fa63b9d58c1 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so)
frame #5: torch::jit::Function::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x60 (0x7fa63bc9a480 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch.so)
frame #6: <unknown function> + 0x7b4a8b (0x7fa69551ca8b in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x7b53bf (0x7fa69551d3bf in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x774d76 (0x7fa6954dcd76 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_python.so)
frame #9: <unknown function> + 0x295a74 (0x7fa694ffda74 in /usr/local/lib/python3.7/dist-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #13: python() [0x530583]
frame #18: python() [0x530547]
frame #25: python() [0x62d082]
frame #28: python() [0x606505]
frame #30: __libc_start_main + 0xf0 (0x7fa6991a9830 in /lib/x86_64-linux-gnu/libc.so.6)
res34 + mask  => fails during torch.jit.load:
  test_model_libtorch = torch.jit.load(SAVE_MODEL_PATH).cuda()
  File "/usr/local/lib/python3.7/dist-packages/torch/jit/__init__.py", line 235, in load
    cpp_module = torch._C.import_ir_module(cu, f, map_location, _extra_files)
IndexError: Argument passed to at() was not in the map.

Thanks for the report! Since this is a bug report, would you mind filing an issue on GitHub so we can follow up there? There I will probably ask you for a script I can run that reproduces the problem. Thanks :slight_smile:

Thanks for your prompt attention.
I will prepare the script and file an issue on GitHub!