Expected Tensor but got GenericList

Hi,
I converted the PyTorch model to TorchScript, and loading the model in C++ was also successful.
But while running inference, I got the following error:

[W TensorImpl.h:1156] Warning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (function operator())
terminate called after throwing an instance of 'c10::Error'
what(): Expected Tensor but got GenericList
Exception raised from reportToTensorTypeError at …/aten/src/ATen/core/ivalue.cpp:854 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7f54c54cc9f9 in /home/Desktop/libtorch-cxx11-abi-shared-with-deps-1.9.0+cpu/libtorch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xd2 (0x7f54c54c9012 in /home/Desktop/libtorch-cxx11-abi-shared-with-deps-1.9.0+cpu/libtorch/lib/libc10.so)
frame #2: c10::IValue::reportToTensorTypeError() const + 0x64 (0x7f54b33ba0d4 in /home/Desktop/libtorch-cxx11-abi-shared-with-deps-1.9.0+cpu/libtorch/lib/libtorch_cpu.so)
frame #3: c10::IValue::toTensor() && + 0x47 (0x5574250fa0a1 in ./example-app-FAN)
frame #4: main + 0x20d (0x5574250f7c52 in ./example-app-FAN)
frame #5: __libc_start_main + 0xe7 (0x7f54b1b13bf7 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: _start + 0x2a (0x5574250f77ea in ./example-app-FAN)

Aborted (core dumped)
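
For context, the C++ side follows the standard libtorch inference pattern, and frame #3 in the trace is the toTensor() call on the forward output. A rough sketch of what that call looks like (paths and shapes are placeholders, not the exact code):

    #include <torch/script.h>
    #include <vector>

    int main() {
        // load the exported TorchScript module
        torch::jit::script::Module module = torch::jit::load("model.pt");

        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(torch::ones({1, 3, 28, 28}));

        // this toTensor() is where "Expected Tensor but got GenericList"
        // is raised (frames #3 and #4 in the trace above)
        torch::Tensor output = module.forward(inputs).toTensor();
        return 0;
    }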

It seems like the PyTorch documentation related to this error is really scarce. Please help me solve the issue.

Hi, I solved the same problem. You can go with the following approach:

  • One way is to create wrapper code around the model: use the split, cat, squeeze and narrow functions so that the multiple inputs are combined into a single tensor.

like this:

    class Model(torch.nn.Module):
        ...
        def forward(self, inputs):
            # split the stacked input back into the image part and the points part
            image, points = torch.split(inputs, 1)  # two chunks along dim 0
            # recover the original (1, 28, 28) points tensor
            points = points.squeeze(1).narrow(1, 0, 1).squeeze(1)
            ...

  • Inference then builds a single combined input for the wrapped model (a matching C++ sketch follows this example):

    x0 = torch.ones(1, 3, 28, 28).to(device)
    x1 = torch.ones(1, 28, 28).to(device)
    # broadcast the points tensor to the image shape so the two can be concatenated
    x1 = x1.unsqueeze(1).expand(1, 3, 28, 28)
    inputs = torch.cat([x0, x1])  # shape (2, 3, 28, 28)

    model = Model().to(device)
    model(inputs)
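
With the wrapper in place, the scripted module takes one tensor and (assuming its forward returns a single tensor) can be called from C++ without hitting the GenericList error. A minimal sketch of the libtorch side, with the file name and shapes as placeholders:

    #include <torch/script.h>
    #include <vector>

    int main() {
        // load the TorchScript module exported from the wrapped Python model
        torch::jit::script::Module module = torch::jit::load("wrapped_model.pt");

        // build the same combined input as on the Python side
        torch::Tensor x0 = torch::ones({1, 3, 28, 28});
        torch::Tensor x1 = torch::ones({1, 28, 28}).unsqueeze(1).expand({1, 3, 28, 28});
        std::vector<torch::jit::IValue> inputs{torch::cat({x0, x1})};  // shape (2, 3, 28, 28)

        // the wrapped forward() is assumed to return a single tensor,
        // so toTensor() on the result is valid here
        torch::Tensor output = module.forward(inputs).toTensor();
        return 0;
    }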