Is there torch::compile in C++?

The Python version of torch.compile is cool! Is there a C++ version of compile?

PyTorch does not have a direct C++ equivalent of torch.compile. The torch.jit namespace in Python hosts PyTorch's JIT compiler, which takes Python code and compiles it to TorchScript, a statically typed subset of Python that can be optimized and run independently of Python.

For C++, PyTorch provides a runtime (LibTorch) that can load and execute TorchScript code. This means you can write your model in Python, use torch.jit.script or torch.jit.trace to convert it to TorchScript, save it to a file, and then load that file in C++ and run it.
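For reference, the tracing path mentioned above can be sketched as follows (the module and file names are placeholders, not anything from the thread):

```python
import torch

class Doubler(torch.nn.Module):
    def forward(self, x):
        return x * 2

# Tracing runs the module once with example inputs and records the
# executed operations into a TorchScript graph.
traced = torch.jit.trace(Doubler(), torch.randn(3))

# The traced module behaves like the original on similar inputs.
out = traced(torch.ones(3))
```

Note that tracing only captures the operations seen for the example input; data-dependent control flow needs torch.jit.script instead.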

Here’s a very simplified example:

In Python:

import torch
class MyModule(torch.nn.Module):
    def forward(self, x):
        return x * 2

my_module = MyModule()

# Convert to TorchScript via scripting
scripted_module = torch.jit.script(my_module)
# Save to a file (path is a placeholder)
scripted_module.save("my_module.pt")

Then, in C++:

#include <torch/script.h>

#include <iostream>
#include <vector>

int main() {
    // Load the TorchScript module
    torch::jit::script::Module module = torch::jit::load("my_module.pt");  // same placeholder path used when saving
    // Build the input list and run the module.
    // Note: forward takes a vector of IValues, not raw tensors.
    auto tensor = torch::randn({1});
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(tensor);
    at::Tensor output = module.forward(inputs).toTensor();
    std::cout << output << std::endl;
    return 0;
}
This enables PyTorch models to be run in a C++ environment. However, please note that the C++ API does not currently support all features of PyTorch, and it is intended for deployment of models rather than for model development.


Pre-2.0, the way to go was to develop in Python, then use TorchScript, save the model, and load it into the C++ LibTorch environment for deployment.

Does that workflow still exist with the new torch.compile (torch.compile Tutorial — PyTorch Tutorials 2.0.1+cu117 documentation) functionality?

Because I assume the model generated by torch.compile is more optimized (faster, less memory) than a model generated by TorchScript.
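For context, torch.compile wraps an eager nn.Module in a Python-side optimized callable; unlike TorchScript there is no standalone artifact to save and load from C++. A minimal sketch (the module name is a placeholder, and backend="eager" is one of the documented debug backends, chosen here only so the example runs without a codegen toolchain):

```python
import torch

class Doubler(torch.nn.Module):
    def forward(self, x):
        return x * 2

model = Doubler()
# torch.compile returns an optimized callable bound to the Python runtime;
# it does not produce a serialized module the way torch.jit.script does.
compiled = torch.compile(model, backend="eager")
out = compiled(torch.ones(3))
```

This is the crux of the question above: the TorchScript save/load workflow produces a file that LibTorch can load, while torch.compile's output stays inside the Python process.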

Any input?