Survey: What are you using the C++ API for?

Hi Max. It is planned, but we are not there yet. We are also still scratching our heads over whether porting to ONNX.js may be less of a pain than compiling with Emscripten.
I’ll let you know if we make any progress

Great, would be appreciated, I’ll do the same.

FYI, I already tried ONNX.js and found it quite limited. It's probably great if you have a simple dense or convolutional model, but once you use more involved modules from the torch library you'll get stuck. I'm working with RNNs, and even the vanilla RNN model is currently unsupported. Even more of a setback is that the library is low priority right now, so these operators probably won't exist for a while to come.

WOW. Great job, PyTorch team!! I'd love to participate in this survey.

  1. What are you using the PyTorch C++ API for?
    I use it to deploy my deep learning engine. I build a simple GUI to interact with it. Most importantly, it lets me use real multi-threading in C++, which I cannot do in Python.

  2. How does your software stack look like? And how does the PyTorch C++ API interact with other components in your stack?
    OpenCV + CUDA(used for opencv) + CuDNN(used for opencv) + LibTorch + CLR C++(GUI project) + Windows 10 + Visual Studio 2017
    Usually I write a class that wraps all the necessary PyTorch C++ state (converting and passing data to the network, etc.) and build it as a DLL, which is then called by the GUI app. In the GUI project, I create an object of my class, which drives the PyTorch C++ API for me.

  3. Are you using JIT (TorchScript) models or torch::nn models?
    I am using TorchScript. It is a great feature to have.

  4. Are you training models in C++?
    I found that accuracy drops slightly compared to the original Python version for some networks, so I am still considering writing a fine-tuned training program in C++ to keep the accuracy within an acceptable range.

  5. How do you think the PyTorch C++ API (or the ecosystem in general) should be improved?
    I think the C++ API is good enough for me (and of course it would be even better if it had all the functions and features that PyTorch Python has).
    Right now the most important thing for me is documentation and usage examples. I remember spending three days just figuring out how to slice a tensor. But in the end, it is very satisfying to see my engine running in my C++ GUI.

Thank you very much, Pytorch! ^o^

  1. It’s ideal for integrating into C++ projects

  2. Currently it interacts with LLVM

  3. No

  4. Yes

  5. Methods like einsum should accept indices as std::vectors instead of parsing string input. There would also be no 26-character limit with this approach.
    Examples of this can be seen in C++ projects such as Eigen and Fastor.

  1. The C++ API is being used for a private project that requires as much control as possible over the entire training and data processing pipeline. Our essential reason for this project is to achieve a certain task with a more flexible and efficient framework than currently exists, and that will allow for rapid augmentations of existing deep learning algorithms.
  2. We will be feeding in data from many threads in the near future, and we are implementing it in a way that is compatible with distributing a workload across a cluster. Our project will provide a high level interface for solving a general type of ML problem and libtorch is hidden away from the eventual user interface by several layers of abstraction. Under the hood, our data processing pipeline will require significant customization of preprocessing functions and data format conversions and similar features.
  3. torch::nn models
  4. Yes
  5. Better documentation, more thorough examples, more composition functions.

To be more specific about composition functions, look at at::grid_sampler (see "Function at::grid_sampler" in the PyTorch master documentation).

This is the easiest interface for upsizing/downsizing/cropping in general that I am aware of for libtorch, and it requires manually creating a grid of values ranging from -1 to 1. It would help if there were a namespace of higher-level functions implementing features such as resize_image_tensor, interpolate_image_tensor, interpolate_tensor, or something of the sort, taking an input, a desired output shape, and an interpolation mode flag.

Thank you for the work you’ve done on the C++ API!

  5 (continued). It would also be great if RTTI were not used in the library by default. This forces our project, as well as other projects, to enable RTTI too.

  1. Fitting 3D volumetric data to a complex model
  2. PyTorch C++ to create a fast module which is imported into Python (ubuntu and gcc)
  3. torch::nn
  4. Not on this project, but on another I am implementing reinforcement learning.
  5. I’m not sure at the moment - it all seems pretty good so far.
  1. / 2.:
    I’m using libtorch for predicting values of a compressor map. I’m working on a research project whose goal is to implement a model predictive control unit (MPC) for controlling a turbo line. In addition to the MPC, we want to run a neural network in parallel to help regulate the turbo strand. The neural network model should run within an RTOS (QNX). For the MPC we plan to use the library qpOASES, and for the neural network, libtorch. Unfortunately I was not able to cross-compile libtorch from source for QNX, as already mentioned here: #38310
    I would be very happy about any help in this regard.
    3.: no
    4.: yes
    5.: Better documentation / more tutorials on the PyTorch website. Better documentation on how to cross-compile from source for other OSes. The possibility to cross-compile from source using just CMake, without other dependencies like PyYAML.