Deducing the model's input C++ data structure

Hello, I have a .pt file that represents a serialized trained model.
I am trying to run inference with it in C++ (loading the model and then running inference with a random input).
How can I deduce the correct format of the input?
Which C++ data structure should I use?
Which sizes are correct for the tensors?
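For context, this is roughly how I load the model and try to inspect its forward signature (the file name is a placeholder; I am not sure this is the right way to discover the expected input):

```cpp
#include <torch/script.h>
#include <iostream>

int main() {
  // Load the serialized TorchScript module.
  torch::jit::Module module = torch::jit::load("model.pt");

  // Print the schema of forward(), which should list the argument
  // names and (if annotated) their types.
  torch::jit::Method forward = module.get_method("forward");
  std::cout << forward.function().getSchema() << std::endl;

  // Alternatively, dump the module structure without method bodies
  // or parameter values.
  module.dump(/*print_method_bodies=*/false,
              /*print_attr_values=*/false,
              /*print_param_values=*/false);
  return 0;
}
```

The schema tells me the argument names, but not the tensor shapes, which is the part I am stuck on.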

The toolchain doesn't give me useful hints at compile time; at runtime it just throws errors like this:

C:\Users\me\AppData\Local\conda\conda\envs\laneChangePy38\lib\runpy.py(87): _run_code
C:\Users\me\AppData\Local\conda\conda\envs\laneChangePy38\lib\runpy.py(194): _run_module_as_main
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 0 but got size 285 for tensor number 1 in the list

But it's unclear to me where the problem is (what is tensor number 1 in the list?), and the code shown for debugging is really low-level PyTorch code (see below).

  File "code/__torch__/trajectory_model/___torch_mangle_272.py", line 19, in forward
    id_embedding = data["identifier"]
    batch_size = data["num_graphs"]
    _4 = (_backbone).forward(_3, _0, _2, _1, id_embedding, int(batch_size), valid_lens, )
          ~~~~~~~~~~~~~~~~~~ <--- HERE
    _5 = torch.slice(_4, 0, 0, 9223372036854775807)
    input = torch.select(_5, 1, 0)
  File "code/__torch__/layers/vectornet/___torch_mangle_261.py", line 20, in forward
    _subgraph = self._subgraph
    _0 = (_subgraph).forward(argument_2, argument_3, argument_4, argument_1, )
    input = torch.view(torch.cat([_0, id_embedding], 1), [argument_6, -1, 66])
                       ~~~~~~~~~ <--- HERE
    _1 = (_global_graph).forward(input, valid_lens, )
    return _1
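From the dump above I am guessing that forward() takes a dict-like object with keys such as "identifier" and "num_graphs" (and maybe "valid_lens"), so this is the call I am currently attempting; all shapes and key names here are guesses on my part, which is exactly what I would like to deduce properly:

```cpp
#include <torch/script.h>
#include <iostream>

int main() {
  torch::jit::Module module = torch::jit::load("model.pt");
  module.eval();

  // Guessed input structure, based on the keys visible in the
  // TorchScript dump. The shapes (285, 2, ...) are placeholders.
  c10::Dict<std::string, torch::Tensor> data;
  data.insert("identifier", torch::rand({285, 2}));  // guessed shape
  data.insert("num_graphs", torch::tensor(1));       // batch size?
  data.insert("valid_lens", torch::tensor({285}));   // guessed

  std::vector<c10::IValue> inputs;
  inputs.emplace_back(data);

  torch::NoGradGuard no_grad;
  c10::IValue out = module.forward(inputs);
  std::cout << out << std::endl;
  return 0;
}
```

With variations of this I get size-mismatch errors like the one above, so presumably at least one of the guessed shapes is wrong.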