Using ATen as a stand-alone tensor library in C++

Hello all,
I require a powerful tensor library in C++ and was looking for a C++ equivalent of NumPy, so I decided to use the ATen library. I downloaded the PyTorch source code from GitHub, and I just wanted to clarify: if I simply copy the ATen folder into my project's working directory, can I get all the tensor creation and manipulation functionality that libtorch provides?

TIA

You would still have to build libtorch to be able to use it. If possible, use the pre-built package and include it in your project.

Thank you for replying @ptrblck. I essentially want just the tensor functionality, not the neural network modules. Can I build just the ATen and c10 packages for my use case?

I don’t know if you can build the ATen lib (i.e. only (some) tensor functions) in a standalone way, but maybe @albanD or @tom might know it (I doubt you can, but am just speculating).

From the PyTorch repo itself, we don’t have any tool to do that, no.
And the old standalone version of the library at GitHub - zdevito/ATen (“ATen: A TENsor library for C++11”) is outdated and does not contain c10.

That being said, nothing prevents you from doing it. But you will have to write the build files yourself :confused:


To add to Alban’s comments:

If you just want to avoid including the C++ API for NNs, there is the NO_API cmake flag, and (but now you’re in the danger zone) there is INTERN_DISABLE_AUTOGRAD, though I must admit I haven’t experimented with it.
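For the record, these switches are ordinary CMake options, so a trimmed-down configure step would look roughly like this. This is only a sketch: flag behavior can change between releases, and INTERN_DISABLE_AUTOGRAD in particular is, as said above, not well tested.

```shell
# From a checkout of the PyTorch repo: configure a libtorch-style build
# without the C++ NN API (and, more adventurously, without autograd).
mkdir build && cd build
cmake -DNO_API=ON \
      -DINTERN_DISABLE_AUTOGRAD=ON \
      -DBUILD_PYTHON=OFF \
      ..
cmake --build . --target torch
```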

If you want to remove more, you might want to dig into the build of PyTorch (personally, I find this more daunting than the C++ code, but that could be just me). The little-known and perhaps not completely intuitive detail is that the libtorch build is currently (so if you read this in late 2021 or 2022, it might have changed) defined in caffe2/CMakeLists.txt. More specifically, this is the place where the Torch bits (C++ API, autograd, JIT, …) get added to libtorch.so.

For a quick test, you could try changing that, but it is probably a good idea not to call the result libtorch if you plan on doing things with it.

Whether or not I would actually recommend doing such a thing is another question; you’re clearly in “use at your own peril” land here.

Best regards

Thomas


Thanks a lot for your detailed replies @tom and @albanD. I remember digging into the source code in the past, and I will admit that the PyTorch source structure is really complex; I was not able to extract the parts I wanted :sweat_smile:. I essentially just want the ways to create tensors and the tensor accessor parts, so I will probably go ahead with the CMake solution and let you know how that turns out. Since I only want to create and access tensors, the (admittedly outdated) repo linked by @albanD might work out too. Thanks a lot for your answers and for helping me out, @ptrblck, @tom, @albanD.

Hi,

Note that if you want autograd, you will need the full libtorch.
Also, the current public API from libtorch and all the Tensor ops are actually the “autograd versions” of these functions. So if you strip out autograd, you might have to use a different API.