How to give input as 'Tensor, Tuple[Tensor, Tensor]'

Dear all,

I’m new to libtorch and I’m loading an LSTM model and doing forward evaluation. The model’s input is (Tensor, Tuple[Tensor, Tensor]), since the following Python code works:

import torch
import matplotlib.pyplot as plt
from torchinfo import summary

actuator_net_file = "resources/actuator_nets/"
actuator_network = torch.jit.load(actuator_net_file)

num_envs = 1
num_actions = 1
sea_input = torch.zeros(num_envs*num_actions, 1, 2, requires_grad=False)
sea_hidden_state = torch.zeros(2, num_envs*num_actions, 8, requires_grad=False)
sea_cell_state = torch.zeros(2, num_envs*num_actions, 8, requires_grad=False)

hc0 = (sea_hidden_state, sea_cell_state)
torques, (sea_hidden_state[:], sea_cell_state[:]) = actuator_network(sea_input, hc0)

I’m looking for the C++ equivalent of torques, (sea_hidden_state[:], sea_cell_state[:]) = actuator_network(sea_input, hc0). My code is as follows:

#include <torch/script.h> // One-stop header.
#include <iostream>
#include <memory>
#include <vector>

int main(int argc, const char* argv[]) {
  std::string actuator_net_file = "/home/fenglongsong/Desktop/example-app/";
  torch::jit::script::Module actuator_network;
  try {
    actuator_network = torch::jit::load(actuator_net_file);
  } catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }

  std::cout << "load model ok\n";

  const int num_envs = 1;
  const int num_actions = 1;
  auto u0 = torch::zeros({num_envs*num_actions, 1, 2});
  auto h0 = torch::zeros({2, num_envs*num_actions, 8});
  auto c0 = torch::zeros({2, num_envs*num_actions, 8});

  std::vector<torch::jit::IValue> inputs;
  std::vector<torch::jit::IValue> hc;


Compiling works fine, but when running the executable I get the following error:

fenglongsong@alvaro-rsl ~/Desktop/example-app/build $ ./example-app .
load model ok
terminate called after throwing an instance of 'c10::Error'
  what():  Expected Tensor but got Tuple
Exception raised from reportToTensorTypeError at ../aten/src/ATen/core/ivalue.cpp:908 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f658d1817ab in /home/fenglongsong/Documents/ocs2_ws/src/libtorch/lib/
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xce (0x7f658d17d15e in /home/fenglongsong/Documents/ocs2_ws/src/libtorch/lib/
frame #2: c10::IValue::reportToTensorTypeError() const + 0x64 (0x7f657712e304 in /home/fenglongsong/Documents/ocs2_ws/src/libtorch/lib/
frame #3: c10::IValue::toTensor() && + 0x4b (0x55d0c7d7701d in ./example-app)
frame #4: main + 0x452 (0x55d0c7d73f50 in ./example-app)
frame #5: __libc_start_main + 0xf3 (0x7f6575b87083 in /lib/x86_64-linux-gnu/
frame #6: _start + 0x2e (0x55d0c7d737ce in ./example-app)

Aborted (core dumped)

Although the error message says “Expected Tensor but got Tuple”, the Python script makes it clear that the inputs should be (Tensor, Tuple[Tensor, Tensor]). I wonder how to fix this. Any suggestions would be much appreciated!

When you say inputs, do you mean the values returned by the network? So your model returns the tuple (torques, (sea_hidden_state[:], sea_cell_state[:]))? C++ can’t directly return multiple values the way Python does; forward will give you back a single tuple that you need to unpack yourself, e.g. with a structured binding (assuming you are using C++17).