I need a powerful tensor library in C++, essentially a NumPy equivalent, and decided to use the ATen library. I downloaded the PyTorch source code from GitHub, and I just wanted to clarify: if I simply copy the ATen folder into my project's working directory, do I get all the functionality for creating and manipulating tensors that we get in libtorch?
If you just want to avoid including the C++ API for NNs, there is the NO_API CMake flag, and there is also INTERN_DISABLE_AUTOGRAD (but now you're in the danger zone); I must admit I haven't experimented with the latter.
If you want to remove more, you might want to dig into the build system of PyTorch (personally, I find this more daunting than the C++ code, but that could be just me). The little-known and perhaps not completely intuitive detail is that the libtorch build is currently (so if you read this in late 2021 or 2022 it might have changed) defined in caffe2/CMakeLists.txt. More specifically, that is the place where the Torch bits (C++ API, Autograd, JIT, …) get added to libtorch.so.
For a quick test, you could try changing that, but it's probably a good idea not to call the result libtorch if you plan on doing things with it.
Whether or not I would actually recommend doing such a thing is another matter; you're clearly in "use at your own peril" territory here.
Thanks a lot for your detailed replies @tom and @albanD. I remember digging into the source code in the past, and I will admit that the PyTorch source code structure is really complex; I was not able to extract the parts I wanted. I essentially just want the ways to create tensors and the tensor accessor parts, so I will probably go ahead with the CMake solution and let you know how that turns out. Even though it is outdated, since I only want to create and access tensors, the link shared by @albanD might work out too. Thanks a lot for your answers and for helping me out @ptrblck, @tom, @albanD.
Note that if you want autograd, you will need the full libtorch.
Also, the current public API from libtorch, including all the Tensor ops, is actually the "autograd version" of these functions. So if you strip out autograd, you might have to use a different API.