Errors when using traced model to predict

Hi everyone, I am new to TorchScript.
I have traced my model, but when I load it and run prediction I get some errors.
Can anyone help me figure it out?

Input:

    at::Tensor f_f = torch::tensor({269,  90,  32, 269,  65,  85,  17, 269, 104,  13,   4,  21,  13, 269, 15,  95,   5, 269,  41,  30,  21,  29, 270, 270},torch::kFloat32);

    at::Tensor f_p = torch::tensor({3,  7, 13, 17, 22, 23},torch::kFloat32);

    at::Tensor b_f = torch::tensor({270, 270,  29,  21,  30,  41, 269,   5,  95,  15, 269,  13,  21,   4, 13, 104, 269,  17,  85,  65, 269,  32,  90, 269},torch::kFloat32);

    at::Tensor b_p = torch::tensor({23, 20, 16, 10,  6,  1},torch::kFloat32);

    at::Tensor w_f = torch::tensor({1020, 1083, 4027, 3087,  262, 8765},torch::kFloat32);

    f_f = at::reshape(f_f , {24, 1});
    f_p = at::reshape(f_p , {6, 1});
    b_f = at::reshape(b_f , {24, 1});
    b_p = at::reshape(b_p , {6, 1});
    w_f = at::reshape(w_f , {6, 1});

    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(f_f);
    inputs.push_back(f_p);
    inputs.push_back(b_f);
    inputs.push_back(b_p);
    inputs.push_back(w_f);

    at::Tensor output = module.forward(inputs).toTensor();
Error:
    terminate called after throwing an instance of 'std::runtime_error'
      what():  Expected tensor for argument #1 'indices' to have scalar type Long; but got CPUFloatType instead (while checking arguments for embedding)
    The above operation failed in interpreter, with the following stack trace:
    at code/__torch__/torch/nn/modules/module.py:8:12
    op_version_set = 1
    class Module(Module):
      __parameters__ = ["weight", ]
      training : bool
      weight : Tensor
      def forward(self: __torch__.torch.nn.modules.module.Module,
        forw_sentence: Tensor) -> Tensor:
        input = torch.embedding(self.weight, forw_sentence, -1, False, False)
                ~~~~~~~~~~~~~~~ <--- HERE
        return input
      def forward1(self: __torch__.torch.nn.modules.module.Module,
        tensor: Tensor) -> Tensor:
        input = torch.embedding(self.weight, tensor, -1, False, False)
        return input
    Compiled from code /home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py(1484): embedding
    /home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py(114): forward
    /home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(516): _slow_forward
    /home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(530): __call__
    /home/bao/Desktop/segment_vtcc_test/lm_lstm_crf/model/lm_lstm_crf.py(222): forward
    /home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(516): _slow_forward
    /home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(530): __call__
    /home/bao/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py(1034): trace_module
    /home/bao/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py(882): trace
    /home/bao/Desktop/segment_vtcc_test/convert_model_1.py(132): <module>

Thanks in advance.

Hey, are you able to post a repro of how you produce your traced model? Either a link to a .pt file or (even better) the code you use to trace and save the model.

From what I can tell, the input to this module, forw_sentence, is being used as the indices for the embedding operation (which, as the error says, must have dtype Long rather than Float), but it's hard to tell what the actual problem is without the full repro.
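
In the meantime, a likely fix is to build the index tensors with an integer dtype instead of torch::kFloat32. A minimal sketch, assuming f_f is one of the tensors your traced model feeds to an embedding layer (adapt to whichever of your five inputs are actually used as indices):

    // Embedding indices must be int64 (torch::kLong), not float
    at::Tensor f_f = torch::tensor({269,  90,  32, 269,  65,  85,  17, 269, 104,  13,   4,  21,  13, 269, 15,  95,   5, 269,  41,  30,  21,  29, 270, 270}, torch::kLong);
    f_f = at::reshape(f_f, {24, 1});

    // Or convert tensors you have already built before pushing them into inputs
    b_f = b_f.to(torch::kLong);
    w_f = w_f.to(torch::kLong);

Any tensor the traced model passes to torch.embedding as indices needs to be Long; float dtypes are only fine for inputs that aren't used as lookup indices.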