Terminate called after throwing an instance of 'c10::Error' what(): isTensor() for LSTM

[ Variable[CPUType]{12,64} ]
terminate called after throwing an instance of 'c10::Error'
what(): isTensor() ASSERT FAILED at /export/users/long/gits/Pytorch/libtorch-1.0.1/libtorch/include/ATen/core/ivalue.h:205, please report a bug to PyTorch. (toTensor at /export/users/long/gits/Pytorch/libtorch-1.0.1/libtorch/include/ATen/core/ivalue.h:205)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7fe9e2979441 in /export/users/long/gits/Pytorch/libtorch-1.0.1/libtorch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7fe9e2978d7a in /export/users/long/gits/Pytorch/libtorch-1.0.1/libtorch/lib/libc10.so)
frame #2: c10::IValue::toTensor() && + 0xa6 (0x4489de in ./dcgan)
frame #3: main + 0x526 (0x4455a0 in ./dcgan)
frame #4: __libc_start_main + 0xf5 (0x7fe9e1508c05 in /usr/lib64/libc.so.6)
frame #5: ./dcgan() [0x443f9c]

Python input
model = torch.load("./model.12.1L.pt", map_location=lambda storage, loc: storage)
hidden = model.init_hidden(args.batch_size)
examples = torch.ones(12, 64).type(torch.LongTensor)
hidden = (torch.zeros(1, 64, 200), torch.zeros(1, 64, 200))
output, hidden = model(examples.to("cpu"), hidden)

C++ input
torch::Tensor indata1 = torch::ones({12,64}, torch::kLong);
std::vector<float> h0_data(1 * 64 * 200, 0.0f);
std::vector<float> c0_data(1 * 64 * 200, 0.0f);
torch::Tensor h0 = torch::from_blob(h0_data.data(), {1, 64, 200});
torch::Tensor c0 = torch::from_blob(c0_data.data(), {1, 64, 200});
torch::jit::IValue tuple = torch::ivalue::Tuple::create({h0, c0});
at::Tensor output = module->forward({indata1, tuple}).toTensor();

The Python code produces the correct output, but when I port it to libtorch the error above occurs. Does anyone know how to solve this? Thanks.

@ptrblck Hi, could you help me?


output, hidden = model(examples.to("cpu"), hidden)


at::Tensor output = module->forward({indata1, tuple}).toTensor();

An LSTM's forward returns a tuple (output, (h_n, c_n)) rather than a single tensor, so you would not expect the .toTensor() on the result to succeed; you have a tuple that you need to unpack first. This is why isTensor fails.

Best regards


P.S.: While you might have preferred ptrblck to answer, it's generally preferred not to tag specific people, so as not to discourage anyone else who wants to contribute from chiming in.

Hi Thomas, thanks for your time. I've fixed the problem by following your proposal.