How can I deploy a PyTorch model in a C++ project?

How can I deploy a PyTorch model in a C++ project, if I don’t mind running Python from C++ in production?

I don’t want to convert the PyTorch-trained model to another framework because I have a lot of custom operations written in C++, and porting those custom operations to other frameworks is not easy.


I think you’ll need to run the Python interpreter embedded in your C++ program. This page could be of help: https://docs.python.org/3/extending/embedding.html
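
A minimal sketch of what that embedding looks like, using the CPython C API from the page above. It assumes a hypothetical `model.py` on the Python path that loads your trained network and exposes a `predict(values)` function returning a list of floats; both names are placeholders for whatever your project actually defines.

```cpp
// Minimal sketch: embed CPython in C++ and call a hypothetical model.py
// that defines predict(values) -> list of floats.
#include <Python.h>  // Python.h must come before standard headers
#include <iostream>
#include <vector>

int main() {
    Py_Initialize();  // start the embedded interpreter

    // Make sure the current directory is importable (where model.py lives).
    PyRun_SimpleString("import sys; sys.path.insert(0, '.')");

    PyObject *module = PyImport_ImportModule("model");  // import model.py
    if (!module) { PyErr_Print(); return 1; }

    PyObject *predict = PyObject_GetAttrString(module, "predict");
    if (!predict || !PyCallable_Check(predict)) { PyErr_Print(); return 1; }

    // Build a Python list from the C++ input vector.
    std::vector<double> input = {0.1, 0.2, 0.3};
    PyObject *args = PyList_New((Py_ssize_t)input.size());
    for (size_t i = 0; i < input.size(); ++i) {
        // PyList_SetItem steals the reference, so no DECREF needed here.
        PyList_SetItem(args, (Py_ssize_t)i, PyFloat_FromDouble(input[i]));
    }

    // Call predict(args); NULL terminates the vararg list.
    PyObject *result = PyObject_CallFunctionObjArgs(predict, args, NULL);
    if (!result) { PyErr_Print(); return 1; }

    // Read the returned list back into C++ (items are borrowed references).
    for (Py_ssize_t i = 0; i < PyList_Size(result); ++i)
        std::cout << PyFloat_AsDouble(PyList_GetItem(result, i)) << "\n";

    Py_DECREF(result);
    Py_DECREF(args);
    Py_DECREF(predict);
    Py_DECREF(module);
    Py_Finalize();  // shut the interpreter down
    return 0;
}
```

Building against the interpreter is platform-dependent; on Python 3.8+ something like `g++ embed.cpp $(python3-config --cflags --ldflags --embed)` usually works, but check `python3-config` on your system.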

Shameless plug for my project: https://github.com/bzcheeseman/pytorch-inference

Embedding the interpreter would also work, but I’ve had trouble running network inference that way in the past.