Libtorch model inference

Hi, I am trying to load a model and run inference in libtorch, but it gives me a Segmentation fault (core dumped) after the model predicts the outputs. I am using the example code and I am not sure why this is happening.

You can find the code here: https://gist.github.com/sanketgujar/baf27851fb1b849ced78cf0668411dd0
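
For reference, the C++ side is essentially the minimal example from the loading tutorial, roughly like this (a sketch, not the exact gist: the {1, 3, 224, 224} input shape is the tutorial's placeholder, and the commented-out CUDA lines only apply if the model runs on GPU):

```cpp
#include <torch/script.h> // One-stop header for loading TorchScript modules.

#include <iostream>
#include <vector>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  torch::jit::script::Module module;
  try {
    // Deserialize the ScriptModule that was traced and saved from Python.
    module = torch::jit::load(argv[1]);
  } catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }

  // Build a dummy input; this shape is the tutorial's placeholder and
  // should match whatever shape the model was traced with.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // If the model was traced on GPU, the module and the inputs need to be
  // moved to the CUDA device before calling forward(), e.g.:
  // module.to(at::kCUDA);
  // inputs[0] = inputs[0].toTensor().to(at::kCUDA);

  // Run inference and print a slice of the output tensor.
  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';

  return 0;
}
```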

Could you also provide your model definition code and show how you export the model?

Model definition code: https://gist.github.com/81dff17eb270594637b5b356591bf8c1.git

I converted the model to TorchScript via tracing, as shown in the tutorial:
https://pytorch.org/tutorials/advanced/cpp_export.html#step-4-executing-the-script-module-in-c

The debugger output:

Thread 1 "example-app" received signal SIGSEGV, Segmentation fault.
0x00007ffff608d9fe in ?? () from /usr/local/cuda-10.0/lib64/libcudart.so.10.0

I am still not sure what the problem is here. The model does the prediction, but it gives a segfault afterwards.