Survey: What are you using the C++ API for?

We are working on plans to improve the PyTorch C++ API, and we’d like to learn how the C++ API is serving your use case, and what we can do to make the experience better.

If you are interested in telling us your use case, we’d love to learn about the following:

  1. What are you using the PyTorch C++ API for?
  2. What does your software stack look like? And how does the PyTorch C++ API interact with other components in your stack?
  3. Are you using JIT (TorchScript) models or torch::nn models?
  4. Are you training models in C++?
  5. How do you think the PyTorch C++ API (or the ecosystem in general) should be improved?

We appreciate any feedback you’d like to give us. Thanks a lot for your support!

2 Likes

Hi, thanks for taking the initiative with this survey.

  1. I am planning to call libtorch from the open-source numerical software Scilab, so that it can be used together with other modules (such as the image processing module) to make design and prototyping much easier.
  2. My software stack: C++ gateways that call libtorch, compiled and linked so that they become native functions in Scilab. On the user end, one would just call something like: trained_model = torch_train(data, target, model_arch, configs) (a sketch follows this list).
  3. Currently I am exploring both.
  4. I would prefer to, but training should be possible without recompiling the C++ code.
  5. It should be able to do everything PyTorch (with Python) can, so that it is totally independent of Python.
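A minimal sketch of what the training entry point of such a gateway could look like (the two-layer network, loss, and hyper-parameters are illustrative assumptions; all Scilab marshalling is omitted):

```cpp
#include <torch/torch.h>

// Hypothetical torch_train-style entry point: the host interpreter's
// buffers are assumed to have been converted to tensors already.
torch::nn::Sequential torch_train(torch::Tensor data, torch::Tensor target,
                                  int64_t epochs = 100, double lr = 1e-3) {
  torch::nn::Sequential model(
      torch::nn::Linear(data.size(1), 64),
      torch::nn::ReLU(),
      torch::nn::Linear(64, target.size(1)));
  torch::optim::SGD optimizer(model->parameters(), lr);
  for (int64_t epoch = 0; epoch < epochs; ++epoch) {
    optimizer.zero_grad();
    auto loss = torch::mse_loss(model->forward(data), target);
    loss.backward();
    optimizer.step();
  }
  return model;  // handed back to the interpreter as the trained model
}
```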

Thanks.

Regards,
Chin Luh

1 Like
  1. What are you using the PyTorch C++ API for?
    == R&D neuroevolution for computer vision

  2. What does your software stack look like?
    == Win, Lin, Mac + Intel Fortran/C++ compilers + VS2017 + CUDA SDK + in-house software

    And how does the PyTorch C++ API interact with other components in your stack?
    == badly… I am still trying to understand how to organize the stack better, and I would be happy to be in touch with the FAIR team to exchange experience on this

  3. Are you using JIT (TorchScript) models or torch::nn models?
    == torch::nn only

  4. Are you training models in C++?
    == yes, that is why I’m using PyTorch instead of TensorFlow (which has no full C++ API yet)

  5. How do you think the PyTorch C++ API (or the ecosystem in general) should be improved?
    == in many ways 🙂

  • clear instructions for compiling from source in static and dynamic modes, as I asked in https://github.com/pytorch/pytorch/issues/25699
  • fixing the bugs described in https://github.com/pytorch/pytorch/issues/25698
  • clear quick-start documentation for installation (even just the smoke test shown after this list)
  • a benchmark white paper showing the advantages (vs. TensorFlow, vs. the Python version, …)
  • support for Intel compilers
  • ready-made example (typical) projects for Windows (VS2017 solution), Linux (code + Makefile), and macOS (Xcode solution)
  • an introductory video explaining all of this in 5 minutes
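For reference, the kind of minimal smoke test such quick-start documentation usually leads with (this mirrors the example in the official libtorch docs):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // If this builds, links, and prints a 2x3 tensor, the install works.
  torch::Tensor t = torch::rand({2, 3});
  std::cout << t << std::endl;
}
```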
3 Likes

I would like to use the C++ API for inference. That’s fine on heavy-duty machines, but it’s very hard to do on low-power, low-resource, even embedded devices. I had trouble compiling libtorch on a Raspberry Pi 3: it kept running out of memory and eventually crashed. It would be great to have a simplified CPU-only build that produces a shared library and a monolithic static library (one that contains all its dependencies). Ideally, it would get to a stage where we could build libtorch for Android, the Raspberry Pi Zero, and even more constrained embedded devices.

Currently I train with the PyTorch Python API, convert to ONNX, and use another lightweight framework for inference. There’s no real problem with that, but if you’re looking for ideas, I would pitch “pytorch-lite” or “pytorch-embedded”. In that light, maybe an OpenCL or Vulkan backend would be a good idea; by the way, Tencent and Alibaba are using Vulkan now.

1 Like

Building from source must be fixed and made stable. Get CI going and cover variations of the build: in the Windows world this means Debug, Release, and Release-with-symbols builds, in both /MT and /MD linkage, for a few tested CUDA versions. I should be able to install the MSI and, five minutes after installing CUDA, build a sample.

I have been trying for weeks to get any DL framework that claims C++ support to work cross-platform. I need C++, and I want to both train and infer. libtorch at least has prebuilt binaries, but they include CUDA and are statically linked, so they are pinned to a particular version, and the Debug and Release distributions use folders with identical names.
The documentation is never in sync with the actual code.
I have tried many approaches and followed the build system, and it just never works. Today’s checkout is hard-coded somewhere to Ninja and completely ignores the Generator setting. Very frustrating.

1. What are you using the PyTorch C++ API for?
I’m writing an interface between the q/kdb interpreter and PyTorch.
There is already an interface to Python, but I’m aiming to use only a shared library built on top of libtorch.

2. What does your software stack look like? And how does the PyTorch C++ API interact with other components in your stack?
q interpreter (executable size around 655 KB) + ktorch.so (around 4 MB) + the libtorch libraries.
Interaction is through a C API from the interpreter.

3. Are you using JIT (TorchScript) models or torch::nn models?
No plans for using jit/torchscript

4. Are you training models in C++?
Yes. The model is phrased in the k/q interpreter, along with the optimizer & loss function.
(These are all pointers passed back and forth: interpreter <—> C API <—> torch / torch::nn objects.)

5. How do you think the PyTorch C++ API (or the ecosystem in general) should be improved?

It was tricky to sort out the best way to handle C++ smart pointers through a C API interface.
It would be useful if there were some guidelines on how to use libtorch via a C API.
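For illustration only (an assumption about one workable pattern, not the actual ktorch code): hide the C++ smart pointer behind an opaque handle, so the interpreter side only ever sees a plain pointer.

```cpp
#include <torch/torch.h>

extern "C" {

typedef void* module_handle;  // opaque to the C side / interpreter

// torch::nn::Linear is a ModuleHolder (a wrapper around a shared_ptr), so
// heap-allocating the holder keeps the module alive across the C boundary.
module_handle linear_create(int64_t in_features, int64_t out_features) {
  return new torch::nn::Linear(in_features, out_features);
}

// Tensors cross the boundary the same way; the caller must free the result.
void* linear_forward(module_handle h, void* input) {
  auto& m = *static_cast<torch::nn::Linear*>(h);
  auto& x = *static_cast<torch::Tensor*>(input);
  return new torch::Tensor(m->forward(x));
}

void tensor_destroy(void* t) { delete static_cast<torch::Tensor*>(t); }
void linear_destroy(module_handle h) {
  delete static_cast<torch::nn::Linear*>(h);
}

}  // extern "C"
```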

I ended up writing, in C++, all the modules that are available in Python so that I could use
them with torch::nn::Sequential: all the various pooling & padding layers, non-linear activations, etc.
Most of these could be implemented with torch::nn::Functional, but I couldn’t figure out how to
query a Functional layer to get the various options/control parameters back from a realized Sequential model in memory.
(The interface is written so that any module/loss fn/optimizer in CPU/GPU memory can be retrieved as database tables.)

I also built loss objects to match those available in Python’s torch.nn.
I haven’t been able to make much use of the C++ DataLoader/Dataset setup yet,
but that may be because my outer data loop starts outside of C++, in the q/kdb interpreter.
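For reference, the C++ Dataset/DataLoader setup mentioned above looks roughly like this (a minimal sketch with made-up data; TableDataset is a hypothetical name):

```cpp
#include <torch/torch.h>

// Minimal in-memory dataset, just to show the shape of the C++ API.
struct TableDataset : torch::data::datasets::Dataset<TableDataset> {
  torch::Tensor data_, targets_;
  TableDataset(torch::Tensor d, torch::Tensor t)
      : data_(std::move(d)), targets_(std::move(t)) {}
  torch::data::Example<> get(size_t i) override {
    return {data_[i], targets_[i]};
  }
  torch::optional<size_t> size() const override { return data_.size(0); }
};

int main() {
  // Stack<> collates individual examples into batched tensors.
  auto ds = TableDataset(torch::rand({100, 8}), torch::rand({100, 1}))
                .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(ds), /*batch_size=*/16);
  for (auto& batch : *loader) {
    // batch.data is [16, 8], batch.target is [16, 1]
  }
}
```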

Thank you to goldsborough, yf225 and everybody for the C++ setup so far…

1 Like

1. What are you using the PyTorch C++ API for?
Training and generation. We are using C++ because we need to “obfuscate” the code.

2. What does your software stack look like? And how does the PyTorch C++ API interact with other components in your stack?
Our app/lib is currently available on Linux only, using MKL + CUDA + FAISS + RDKit + some other libs. Everything is C++ for the lib; everything is JavaScript for the client, which uses a native Node.js addon (N-API) to interact with our lib.

3. Are you using JIT (TorchScript) models or torch::nn models?
Only torch::nn

4. Are you training models in C++?
Yes

5. How do you think the PyTorch C++ API (or the ecosystem in general) should be improved?
Resolve the static-linking master bug, #21737.

C++17: I am stuck with <experimental/ >; I cannot use futures and the like, constexpr if, … and must write old-style lambdas, old-style templates, …

Thread pools: c10 comes with a ThreadPool, but there is no way to wait for a job out of the box. I think libtorch needs one solid, shared thread pool, as creating a thread pool in every module creates too many threads.

Documentation: at the very beginning, because libtorch was not available for CUDA 10, I spent a lot of time trying to understand how to compile a compact lib for my platform and my needs without using Python: “what is USE_THISFLAG for? For what purpose? What happens if I set it ON or OFF?” For devs like me who are not data scientists at all, it would be great to introduce the packages you are using (xxblas, xxpack, …).
I am also spending time converting between Python and C++ (a simple example: what is torch::mm in Python? What is t.mean() in C++? …); see the side-by-side sketch below.
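For anyone in the same spot, a few of these correspondences side by side (nothing here beyond standard libtorch calls; the variable names are mine):

```cpp
#include <torch/torch.h>

int main() {
  auto a = torch::rand({3, 4});  // Python: a = torch.rand(3, 4)
  auto b = torch::rand({4, 2});  // Python: b = torch.rand(4, 2)
  auto c = torch::mm(a, b);      // Python: c = torch.mm(a, b)
  auto m = a.mean();             // Python: m = a.mean()
  auto s = a.sum(/*dim=*/1);     // Python: s = a.sum(dim=1)
}
```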

Thank you

  1. What are you using the PyTorch C++ API for?
    On-the-fly generation of neural networks for data stream analysis. Python’s GIL stands in the way of efficiently using multiple threads, so C++ was the better option (see the threading sketch after this list).
  2. What does your software stack look like? And how does the PyTorch C++ API interact with other components in your stack?
    Linux - C/C++ - PyTorch
  3. Are you using JIT (TorchScript) models or torch::nn models?
    torch::nn only
  4. Are you training models in C++?
    Yes
  5. How do you think the PyTorch C++ API (or the ecosystem in general) should be improved?
  • Better documentation
  • Better support for compilation from source
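A minimal sketch of the threading motivation in item 1 (the model and shapes are placeholders): several C++ threads run forward passes concurrently, with no GIL to serialize them.

```cpp
#include <torch/torch.h>
#include <thread>
#include <vector>

int main() {
  auto model = torch::nn::Linear(16, 4);
  model->eval();  // shared, read-only model for inference
  std::vector<std::thread> workers;
  for (int i = 0; i < 4; ++i) {
    workers.emplace_back([&model] {
      // NoGradGuard is thread-local, so each worker sets its own.
      torch::NoGradGuard no_grad;
      auto out = model->forward(torch::rand({8, 16}));
    });
  }
  for (auto& w : workers) w.join();
}
```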
  1. What are you using the PyTorch C++ API for?
    inference for a program on the Windows platform
  2. What does your software stack look like? And how does the PyTorch C++ API interact with other components in your stack?
    Linux - C/C++ - PyTorch
  3. Are you using JIT (TorchScript) models or torch::nn models?
    torch::nn only
  4. Are you training models in C++?
    No
  5. How do you think the PyTorch C++ API (or the ecosystem in general) should be improved?
    DO NOT CHANGE THE API FREQUENTLY