Errors are reported when I try to save a model for C++ libtorch inference

Hi everyone,
I have some issue in trying to save pytorch model for C++ libtorch inference by torch.jit.sctipt().
First, the reported errors looks like

Module 'ConvModule' has no attribute 'padding_layer'

I found it was caused by uninitialized model attributes, which are initialized in the module's __init__() function. So I added the @torch.jit.export decorator to export __init__() to the C++ model explicitly, and this issue seemed fixed. Then another issue came up:

Python type cannot be used as a value:
File "/xxxx/", line 87
order=('conv', 'norm', 'act')):
super(ConvModule, self).__init__()
~~~~~~~~~~ <--- HERE

I have no idea how to solve this issue, and I did not find any useful information online. Am I doing this right in the first place? Sorry, I'm not familiar with PyTorch or torch.jit, so I just applied the solutions I found.
I only know that torch.jit.script saves the model for C++ inference statically, so it cannot handle an if clause when the variable's type is unclear. Could that be the root cause of all these issues?

Please, any information would be appreciated, or just let me know if more info is necessary. Thank you.

Could you try to remove the input arguments to super() (they are not needed in Python 3.x anymore) and call super().__init__()?
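For reference, a minimal plain-Python sketch of that change (ConvModule here is a stand-in; the real class subclasses torch.nn.Module, which is omitted so the snippet runs without PyTorch installed):

```python
class ConvModule:
    def __init__(self, order=('conv', 'norm', 'act')):
        # Zero-argument super() (Python 3) avoids naming the class object
        # itself; TorchScript rejects super(ConvModule, self) because the
        # Python class type cannot be used as a value in scripted code.
        super().__init__()
        self.order = order

m = ConvModule()
print(m.order)  # ('conv', 'norm', 'act')
```

The behavior in plain Python is identical either way; the zero-argument form just removes the class reference that torch.jit.script cannot handle.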