C++ JIT: how to handle a model that returns four tensors?

My model comes from RefineDet, so its forward pass returns four tensors.
When I tried to convert the result to a tensor, I hit the following problem:
auto input = torch::randn({1, 3, 512, 512});
auto flag = torch::tensor(true);
auto output = model->forward({input, flag}).toTensor();  // throws: the result is a tuple, not a single tensor

terminate called after throwing an instance of 'c10::Error'
what(): isTensor() ASSERT FAILED at /home/qlt/LibTorch/include/ATen/core/ivalue.h:182, please report a bug to PyTorch. (toTensor at /home/qlt/LibTorch/include/ATen/core/ivalue.h:182)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f06fd782fe1 in /home/qlt/LibTorch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f06fd782dfa in /home/qlt/LibTorch/lib/libc10.so)
frame #2: c10::IValue::toTensor() && + 0xa8 (0x443c2a in ./build/launch)
frame #3: main + 0x452 (0x440788 in ./build/launch)
frame #4: __libc_start_main + 0xf0 (0x7f06fb5e9830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #5: _start + 0x29 (0x43ee09 in ./build/launch)
How can I deal with this multiple-tensor return issue?

@smth told me yesterday, that there have been some fixes which recently went into master. Using the master from github fixed this issue for me (the latest nightly would probably also work).

OK, I found a method to deal with this problem:
first convert the IValue to a tuple,
then extract the individual elements.
auto output = model->forward({input, flag});  // keep the result as an IValue, do not call toTensor()
auto tpl = output.toTuple();
auto arm_loc = tpl->elements()[0].toTensor();
auto arm_conf = tpl->elements()[1].toTensor();
auto odm_loc = tpl->elements()[2].toTensor();
auto odm_conf = tpl->elements()[3].toTensor();
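For context, the four-tensor return originates on the Python side before the model is scripted. A minimal sketch (the `FourHeads` module below is a hypothetical stand-in, not the actual RefineDet) showing how a scripted model whose `forward` returns four tensors produces the tuple `IValue` you then unpack in C++:

```python
import torch

class FourHeads(torch.nn.Module):
    # Hypothetical stand-in for RefineDet: forward returns four tensors,
    # which TorchScript packs into a tuple when called from C++.
    def forward(self, x: torch.Tensor, flag: torch.Tensor):
        arm_loc = x.mean(dim=1)
        arm_conf = x.sum(dim=1)
        odm_loc = x * 2
        odm_conf = x + 1
        return arm_loc, arm_conf, odm_loc, odm_conf

model = torch.jit.script(FourHeads())
outs = model(torch.randn(1, 3, 8, 8), torch.tensor(True))
print(len(outs))  # the four tensors come back as a tuple
```

Saving this with `model.save(...)` and loading it in C++ gives exactly the situation above: `forward` returns a tuple `IValue`, so `toTuple()` plus per-element `toTensor()` is the right way to unpack it.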


@lingtengQIu if you update to the latest Preview build, this bug has been fixed.