PyTorch model deployed within signal processing block written in C

Hi all:

I would like to deploy my PyTorch model (written in the usual “research-friendly” eager mode with the Python API) within a signal processing framework (written in C code) and run inference there.

What are the steps I should take to make it happen?

Basically what I want is:
Signal Processing Block (C code) --> PyTorch Model (takes in a signal and outputs a signal) --> Signal Processing Block (C code).

Is C a strong requirement, or is C++ OK?
If you can use C++, then you can use libtorch and the C++ API to run a JIT (TorchScript) model without Python. This doc should be a good introduction to it.
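
For a concrete picture, here is a minimal sketch of that approach, assuming the model has already been exported from Python with `torch.jit.trace` or `torch.jit.script` and saved as `model.pt` (the file name and the 1x1024 signal shape are just placeholders):

```cpp
// Minimal libtorch inference sketch. Assumes the model was exported with
// torch.jit.trace/script and saved as "model.pt" (placeholder name), and
// that it takes a [1, 1024] float tensor and returns a tensor.
#include <torch/script.h>

#include <iostream>
#include <vector>

int main() {
  // Load the serialized TorchScript module.
  torch::jit::script::Module module;
  try {
    module = torch::jit::load("model.pt");
  } catch (const c10::Error& e) {
    std::cerr << "Failed to load model: " << e.what() << std::endl;
    return 1;
  }

  // Wrap the signal samples coming from the upstream processing block
  // in a tensor (zeros here, shape is a placeholder).
  std::vector<float> signal(1024, 0.0f);
  torch::Tensor input =
      torch::from_blob(signal.data(), {1, 1024}, torch::kFloat).clone();

  // Run inference; the output tensor would feed the downstream block.
  std::vector<torch::jit::IValue> inputs{input};
  torch::Tensor output = module.forward(inputs).toTensor();
  std::cout << "Output shape: " << output.sizes() << std::endl;
  return 0;
}
```

The CMake setup from the official tutorial (`find_package(Torch REQUIRED)` plus linking against `${TORCH_LIBRARIES}`) takes care of the include and link flags.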

Unfortunately, the PyTorch model has to be integrated within C code.

Thanks for pointing to the doc.

In that case, I guess you can always embed a Python interpreter in your C code and run the Python model directly in that interpreter 🙂
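
For anyone landing here later, a rough sketch of that embedding approach, assuming a hypothetical wrapper module `model_runner.py` that exposes `run_model(samples)` and internally loads and calls the PyTorch model (both names are made up). The CPython API used here is plain C, so the same code drops straight into a C source file:

```cpp
/* Embedding CPython to run the model. Assumes a hypothetical wrapper module
 * "model_runner" exposing run_model(list_of_floats) -> list_of_floats that
 * wraps the PyTorch model. Plain CPython C API, so it also compiles as C. */
#include <Python.h>
#include <stdio.h>

int main(void) {
  Py_Initialize();  /* start the embedded interpreter */

  /* Import the Python wrapper around the model. */
  PyObject *module = PyImport_ImportModule("model_runner");
  if (!module) { PyErr_Print(); return 1; }

  PyObject *func = PyObject_GetAttrString(module, "run_model");
  if (!func || !PyCallable_Check(func)) { PyErr_Print(); return 1; }

  /* Pack the signal samples from the upstream C block into a Python list. */
  double signal[4] = {0.1, 0.2, 0.3, 0.4};
  PyObject *py_signal = PyList_New(4);
  for (int i = 0; i < 4; ++i)
    PyList_SetItem(py_signal, i, PyFloat_FromDouble(signal[i]));

  /* Call run_model(signal) and copy the result back for the next block. */
  PyObject *result = PyObject_CallFunctionObjArgs(func, py_signal, NULL);
  if (!result) { PyErr_Print(); return 1; }
  for (Py_ssize_t i = 0; i < PyList_Size(result); ++i)
    printf("out[%zd] = %f\n", i, PyFloat_AsDouble(PyList_GetItem(result, i)));

  Py_DECREF(result);
  Py_DECREF(py_signal);
  Py_DECREF(func);
  Py_DECREF(module);
  Py_Finalize();
  return 0;
}
```

You would compile against the Python headers and link with libpython (e.g. via `python3-config --cflags --ldflags --embed`), and the embedded interpreter has to be able to find both the wrapper module and PyTorch itself (same environment / `PYTHONPATH`).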