The PyTorch blog post "A Tour of PyTorch Internals" is out of date. How can I learn more about PyTorch internals?

I recently started reading the PyTorch source code, but I am totally lost. In the tutorial, Part One introduces the "Tensor type", but "csrc/generic/Tensor.cpp" has disappeared. I only found "Tensor.h" in the ATen library and in pytorch/torch/lib/THD/master_worker/worker/dispatch/ (but the latter folder is claimed to be "dead" now).

I have read the short introductions to PyTorch internals in the documentation and tutorials. So far I only know that ATen is the tensor library of PyTorch (its source code is easier to read since the names have clear meanings), and that pytorch/csrc is the part that integrates C++ and Python.

I have also tried to find some hints on GitHub by reading the pull request history, but it is quite hard.

I am familiar with basic C++ grammar but bad at code organization. My own past C++ projects only included a few xxx.h and xxx.c files, so I am quite confused when facing such a big project. Can anyone give me some suggestions on how to read the source code?

Thanks!

4 Likes

Sorry for the post being out of date. Currently I don't think there is a good guide on this topic, especially since we are in the middle of a big refactoring. We may have something once PyTorch 1.0 is out. May I ask why you want to read the source code?

3 Likes

Hi, I'm also interested in reading the source code of PyTorch. As a C++/CUDA newbie, I also found it hard to start digging into the code.

The reasons are mainly that I want to understand PyTorch (and Python) more deeply and become a better user of it. I'm having a hard time figuring out where the variables will live (CPU/GPU, which GPU, etc.) and how the variables are passed (by reference or by copy) in memory when I'm writing data augmentation code on the GPU. I'm also interested in how PyTorch interacts with Python and C++/CUDA code. And since it's such a great library, maybe I could become a contributor once I have a better understanding of the internals.
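
On the "reference or copy" question, a plain-Python sketch (no PyTorch required) already shows most of the behavior: Python passes object references into functions, not copies, so in-place mutation is visible to the caller. PyTorch tensors behave the same way, with `b = a` aliasing the same storage and `a.clone()` making a copy; the helper names below are just illustrative.

```python
def scale_in_place(xs, k):
    """Mutates the caller's list: xs is a reference, not a copy."""
    for i in range(len(xs)):
        xs[i] *= k

def scale_copy(xs, k):
    """Returns a new list; the caller's list is untouched."""
    return [x * k for x in xs]

data = [1.0, 2.0]
scale_in_place(data, 3)
print(data)          # [3.0, 6.0] -- mutated through the reference

fresh = scale_copy(data, 2)
print(data, fresh)   # [3.0, 6.0] [6.0, 12.0] -- original unchanged
```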

It would be nice if there’s a developer getting started guide available.

Thanks for writing this amazing library :grinning:

2 Likes

Thanks for explaining!

For a very quick and high-level description:

  1. Tensor operations are implemented in the C++ tensor library called ATen, which also includes some old code from the Torch days (we are rewriting it for PyTorch 1.0!). It's important to note that ATen doesn't have autograd functionality (at least not currently); it only provides a Tensor API and operators on Tensors of various types, e.g., one of them being CPUFloatTensor.

    Since there are many different Tensor types and different operators, a lot of codegen is applied here, which unfortunately makes the code harder to read for new contributors. But we are constantly improving it!

  2. To connect ATen with Python, we have a lot of C++ code living under torch/csrc, implementing things like autograd, the JIT, and the dtype/device/layout objects. In particular, for autograd, we subclass each of the ATen tensor types with something called VariableType, which implements the functionality to track the computation graph (i.e., history) that the autograd engine uses to back-propagate and compute gradients.

    Similar to ATen, we also use codegen to generate the operator interface on VariableTypes and, more importantly, to generate the backward pass for each operator. The mapping between forward and backward functions is listed in a YAML file, with many backward functions implemented in Functions.cpp. Note that all operations in Functions.cpp operate on Variables, i.e., Tensors with VariableType, which means that if a function, e.g. my_op_backward, is implemented using existing operations that support autograd, double backward also works out of the box! (Of course, one may also implement a custom double backward for efficiency reasons.)

  3. Finally, the connection between C++ land (i.e., torch/csrc) and Python is mostly implemented with Python C API and pybind11.
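
The VariableType idea from point 2 above — each operation records its inputs and a backward formula, and the engine walks that recorded graph to accumulate gradients — can be sketched in a few lines of plain Python. This is a toy illustration of the concept only, not PyTorch's actual implementation (which lives in C++ and handles tensors, not scalars):

```python
# Toy reverse-mode autodiff: each op records parents + backward formulas,
# and backward() walks the recorded graph, accumulating gradients.
class Var:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value
        self.parents = parents      # inputs that produced this Var
        self.grad_fns = grad_fns    # one local-gradient fn per parent
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g * other.value,   # d(out)/d(self)
                             lambda g: g * self.value))   # d(out)/d(other)

    def backward(self, grad=1.0):
        self.grad += grad
        for parent, fn in zip(self.parents, self.grad_fns):
            parent.backward(fn(grad))

x, y = Var(3.0), Var(4.0)
z = x * y
z.backward()
print(x.grad, y.grad)  # 4.0 3.0
```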
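The codegen step mentioned in point 2 — a declarative forward-to-backward mapping in YAML from which backward stubs are generated — can also be sketched. The table and the emitted C++ below are purely illustrative, modeled loosely on the idea behind the derivatives YAML file, not its real entries or format:

```python
# Toy derivative codegen: a declarative table maps each forward op to a
# backward formula; a generator emits a stub implementation from it.
DERIVATIVES = {
    # op_name: backward formula in terms of grad / saved values
    "exp": "grad * result",
    "mul": "grad * other",
}

def generate_backward(op_name):
    formula = DERIVATIVES[op_name]
    return (f"Tensor {op_name}_backward(const Tensor& grad, "
            f"const Tensor& self /* ... */) {{\n"
            f"  return {formula};\n"
            f"}}")

print(generate_backward("exp"))
```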

22 Likes

Oh, I just realized that you are not the OP LOL. But I hope my reply helps!

1 Like

I have made available a slide deck with some details about PyTorch internals in case you’re interested.

There are also two other articles I wrote about it in case you’re interested:

Hope it helps !

13 Likes

The slide deck says "currently unavailable". Can you check whether it's a problem with the slides or with the website?

It was probably something temporary with the website, because I can now access it without problems.

1 Like

“A tour of PyTorch internals” is a great article!

Article in short:

  1. We get a generic C++ tensor and customize it for a particular type of tensor with the split_types function, which is available here: https://github.com/pytorch/pytorch/blob/cca247635c6edb323176eeac7a18d3e9ab71c558/tools/setup_helpers/split_types.py The tutorial mentions that it's called from setup.py, but I could not find it there anymore.
  2. The same logic applies to C++ headers.
  3. Then there is cwrap, which creates C++ methods for particular objects. It uses a YAML file to do that: cwrap reads the YAML and generates the code for the methods.
  4. The tutorial mentions CPython, but on GitHub I found that pybind11 handles the interactions between Python and C++.
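
The split_types idea from points 1 and 2 — specializing one generic source template once per concrete scalar type — can be mimicked with `string.Template`. This is a simplified stand-in to show the concept, not the real logic in tools/setup_helpers/split_types.py:

```python
from string import Template

# One "generic" declaration, specialized per concrete scalar type --
# roughly how the old TH codegen turned a generic Tensor source into
# THFloatTensor / THDoubleTensor / ... variants. Simplified illustration.
GENERIC = Template("$ctype TH${name}Tensor_sum(TH${name}Tensor *t);")

TYPES = {"Float": "float", "Double": "double", "Long": "int64_t"}

def specialize_all():
    """Expand the generic template once per scalar type."""
    return {name: GENERIC.substitute(name=name, ctype=ctype)
            for name, ctype in TYPES.items()}

for decl in specialize_all().values():
    print(decl)
```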

Could FB make "A Tour of PyTorch Internals (Part III)"? We really need an updated version, and split_types and cwrap make life really complicated. I don't know about others, but I would definitely wait even 30 minutes for PyTorch to build to finally know what happens.

UPDATE: Well, it's already been an hour since I started building PyTorch from source.

In case someone is looking for a new blog post on this, here's one from @ezyang:

PyTorch Internals for potential contributors

6 Likes