Hi,
You’re partly correct that a lot of things are in Python, but there are also many things you can do in C++. PyTorch is built on a tensor library called ATen, on which almost all operators you use in Python are based. So most of the `tensor.x` and `torch.x` functions like `relu`, `addmm`, `matmul` etc. are available in C++. See https://github.com/pytorch/pytorch/tree/master/aten for documentation on this. The automatic differentiation engine is also written in C++, so you can differentiate variables just like in PyTorch (e.g. `var.backward()` will work).
Now, a lot of the convenience things we have in PyTorch, like high-level “layers” (Linear, Conv etc.), optimizers, modules etc., are not included in ATen. However, we are actively working on an official C++ API that will provide all of those things. It is being developed on master under `torch/csrc/api`. It’s theoretically already usable, but it is being overhauled on a daily basis, so it will take about one more month until I would call it stable enough to use. You can look at e.g. `test/cpp/api/integration.cpp` for examples of training models with it.
This C++ API we are developing is based on an unofficial C++ interface to PyTorch called autogradpp: https://github.com/ebetica/autogradpp/ It is used by some research teams and provides layers, optimizers, modules and other things that might be useful for you. It is not going to be maintained going forward, since we are releasing an official API soon, but it’s also more stable at the moment.
So, right now, the situation is still a bit tricky: there’s no stable, convenient C++ API just yet, but one is coming soon. I would suggest looking at autogradpp or plain ATen for now.
Hope that helps,
Peter