Where does `torch._C` come from?

Thanks a lot for this demo, and this info about segfault is good to know!

However, since I repeatedly failed to properly run gdb_test.py when installing Python from source, I guess this is why I can’t use gdb to do the same as you suggested.

The deepest reason I have persisted in installing gdb is that I want to understand the logic of the building blocks of deep learning (such as conv2d, relu, etc.) through the code. pdb has helped me a lot in understanding code logic at the Python level (I use pdb to explore the logic of perfectly healthy code, not buggy code, by experimenting and checking the values of the internals of functions and classes), so I expect gdb would do the same for me with C/C++ code, since most of PyTorch’s activations and layers are written in C/C++.

Now, since I don’t expect gdb to work on my Mac anytime soon, is there another, easier way for me to experiment with the torch C/C++ code (for example, to experiment with ConvForward in THNN) to understand the calculations inside the classes and functions, the way we can use pdb on pure Python code?

Although my gdb is not working on Python code, it does seem to work on C/C++. So if you can give me a very simple demo of how to experiment with ConvForward or Threshold_updateOutput (or anything else) in C/C++, I can start learning to use gdb with C/C++ to explore the Torch code.

Thanks a lot!

If you can’t jump inside gdb, the other alternative I see is to look at the source code and study it.
The core pytorch libraries are implemented inside the lib folder.

A quick summary: all tensor operations are defined in TH (for the CPU) and THC (for the GPU), and the neural network implementations are defined in THNN (CPU) and THCUNN (GPU).
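To give a feel for how small these THNN kernels often are, here is a pure-Python sketch (an illustration, not the actual C source) of the element-wise operation that the THNN function Threshold_updateOutput computes; ReLU is the threshold=0, value=0 special case of it:

```python
# Pure-Python sketch of what THNN's Threshold_updateOutput computes
# (illustration only, not the real C code).
def threshold_update_output(inputs, threshold=0.0, value=0.0):
    """Element-wise: keep x if x > threshold, else replace it with `value`."""
    return [x if x > threshold else value for x in inputs]

def relu(inputs):
    # ReLU is Threshold with threshold=0, value=0.
    return threshold_update_output(inputs, threshold=0.0, value=0.0)

print(relu([-2.0, -0.5, 0.0, 1.5, 3.0]))  # [0.0, 0.0, 0.0, 1.5, 3.0]
```

Reading the C source with this mental model in mind makes the macro-heavy TH/THNN code much easier to follow: most of it is boilerplate around a loop like the one above.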

Also, the following link might be helpful (even though it’s not 100% up-to-date): https://apaszke.github.io/torch-internals.html .
Fun fact about this document: it consists of the notes Adam Paszke (one of the main devs of PyTorch) took while he was studying the Torch internals! :slight_smile:


Thank you so much for the links and suggestions, they are very helpful! I will study Adam Paszke’s blog post more carefully and try out the demo code this time, as it will make more sense to me now.

Hello,
Can you help me find torch._C._nn.binary_cross_entropy?
I still can’t find it. Also, are there any tips for finding something like this?

thank you very much!

Hi,

This function is generated automatically by ATen; its definition is here, and it calls directly into the C backend that is implemented in this file for the CPU and this one for the GPU.
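For intuition about what that backend computes, here is a pure-Python sketch of the element-wise binary cross entropy formula with a mean reduction (an illustration only; the real C/CUDA kernels also handle per-element weights and more edge cases, and the `eps` clamp here is just a simplification for numerical safety):

```python
import math

def binary_cross_entropy(predictions, targets, eps=1e-12):
    """Mean of -(t*log(p) + (1-t)*log(1-p)) over all elements.

    Pure-Python sketch of the formula behind
    torch._C._nn.binary_cross_entropy; not the actual kernel.
    """
    total = 0.0
    for p, t in zip(predictions, targets):
        total += -(t * math.log(max(p, eps))
                   + (1 - t) * math.log(max(1 - p, eps)))
    return total / len(predictions)

# A confident, correct prediction gives a small loss:
print(binary_cross_entropy([0.9, 0.1], [1.0, 0.0]))
```

Knowing the formula makes it much easier to recognize the corresponding loop when you do locate the C implementation.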


Thank you very much.

I found that the name of a function varies between PyTorch and C, which means that with “Ctrl + F” I can’t find what I am looking for in the C files.
So, is there some relationship between the different names of the same function in PyTorch and in C? I.e., how do you find it, step by step? :sweat_smile::sweat_smile:

Besides, can I assume that the source code of all functions in PyTorch can be found? :relaxed:

In the nn C code semantics, forward corresponds to updateOutput, backward with respect to the input is updateGradInput, and backward with respect to the parameters (if there are any) is accGradParameters. These names are legacy from the old Torch7 naming conventions.
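To make the convention concrete, here is a hypothetical pure-Python toy module for output = weight * input that mirrors the legacy Torch7/THNN method names (the class and its methods are invented for illustration, not real THNN code):

```python
class Scale:
    """Toy module computing output = weight * input, using the legacy
    Torch7/THNN names: forward -> updateOutput, backward wrt the input
    -> updateGradInput, backward wrt the parameters -> accGradParameters."""

    def __init__(self, weight):
        self.weight = weight
        self.grad_weight = 0.0  # accumulated parameter gradient

    def updateOutput(self, x):
        # The "forward" pass.
        return self.weight * x

    def updateGradInput(self, x, grad_output):
        # d(output)/d(input) = weight, so the chain rule gives:
        return self.weight * grad_output

    def accGradParameters(self, x, grad_output):
        # d(output)/d(weight) = x; parameter gradients are *accumulated*
        # (hence the "acc" prefix), not assigned.
        self.grad_weight += x * grad_output

m = Scale(3.0)
print(m.updateOutput(2.0))           # 6.0
print(m.updateGradInput(2.0, 1.0))   # 3.0
m.accGradParameters(2.0, 1.0)
print(m.grad_weight)                 # 2.0
```

So when you are hunting for the C code behind a Python forward/backward pair, search the THNN sources for `<ModuleName>_updateOutput` and `<ModuleName>_updateGradInput` rather than for the Python-level names.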

I am sorry, I don’t understand this question: “can I assume that the source code of all functions in PyTorch can be found?” :slight_smile:


I am sorry. Can I find the original implementation of all of the functions? E.g., you helped me find torch._C._nn.binary_cross_entropy here, which I once thought might not be open to us.

Hi,

Yes, all the functions are in the repo :slight_smile:
Some are implemented in pure Python; the others are available, in a similar way to binary_cross_entropy, behind the ATen wrapper.


OK, thank you very much

Can anyone help me find this C function?
torch._C._nn.log_softmax(input, dim)
I need to see the function body.

Hi,

This function is defined here (definition) and it is implemented here (implementation).
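The math behind that kernel is compact; here is a pure-Python sketch of a numerically stable log-softmax over a 1-D list (an illustration of the formula, not the actual implementation), using the standard max-subtraction trick to avoid overflow in the exponentials:

```python
import math

def log_softmax(xs):
    """log(softmax(x)) computed stably as x - max - log(sum(exp(x - max)))."""
    m = max(xs)
    log_sum = math.log(sum(math.exp(x - m) for x in xs))
    return [x - m - log_sum for x in xs]

vals = log_softmax([1.0, 2.0, 3.0])
print(vals)
# Exponentiating the results recovers a softmax, which sums to 1:
print(sum(math.exp(v) for v in vals))
```

The real kernel does essentially this along the requested `dim`, which is why computing log_softmax directly is more stable than calling log on a separately computed softmax.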


There is a bug after PyTorch 0.4, as discussed here. When I call Python from MATLAB, I get:

Traceback (most recent call last):
  File "", line 1, in
  File "/home/cheng/Documents/tracking/HRL/track/track_single.py", line 8, in
    import torch
  File "/home/cheng/anaconda2/envs/py36th0.4.1/lib/python3.6/site-packages/torch/__init__.py", line 80, in
    from torch._C import *
RuntimeError: stoi

And I think that if I can find where the stoi function is called during the import process, it will help to solve the problem. I searched for stoi in the PyTorch GitHub repo and found 14 results. Can anybody take a look at which of these may be called during import? Search results for stoi

Hi smth, can you tell me where torch._C.TensorBase comes from?

I am trying to find out how to replace updateGradInput from the thnn backend now that it is apparently gone.
I am trying to get access to backward weights. Former versions of PyTorch were happy with:

_backend = type2backend[input.type()]
_backend.SpatialConvolutionMM_updateGradInput(
    _backend.library_state,
    input,
    grad_output,
    grad_input,
    weight_feedback,
    finput,
    fgradInput,
    ksize[0], ksize[1],
    int(stride[0]), int(stride[1]),
    int(padding[0]), int(padding[1])
)
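In recent PyTorch versions the usual replacement for this use case is `torch.nn.grad.conv2d_input` (worth checking against the docs of your installed version). As for what `SpatialConvolutionMM_updateGradInput` actually computed, here is a pure-Python 1-D sketch (stride 1, no padding; the function names `conv1d_valid` and `conv1d_grad_input` are invented for this illustration): the gradient with respect to the input is a "full" convolution of the output gradient with the kernel.

```python
def conv1d_valid(x, w):
    """Forward pass: 'valid' cross-correlation, out[i] = sum_k w[k] * x[i+k]."""
    m = len(w)
    return [sum(w[k] * x[i + k] for k in range(m))
            for i in range(len(x) - m + 1)]

def conv1d_grad_input(grad_out, w, input_len):
    """Gradient of conv1d_valid wrt its input: each output gradient g at
    position i scatters g * w[k] back to input position i + k.  This is
    the 1-D analogue of the role SpatialConvolutionMM_updateGradInput
    played in THNN (illustration only: stride 1, no padding)."""
    grad_x = [0.0] * input_len
    for i, g in enumerate(grad_out):
        for k in range(len(w)):
            grad_x[i + k] += g * w[k]
    return grad_x

x, w = [1.0, 2.0, 3.0, 4.0], [1.0, -1.0]
print(conv1d_valid(x, w))                        # [-1.0, -1.0, -1.0]
print(conv1d_grad_input([1.0, 1.0, 1.0], w, 4))  # [1.0, 0.0, 0.0, -1.0]
```

The second result matches the analytic gradient: summing the forward outputs gives x[0] - x[3], whose gradient wrt x is [1, 0, 0, -1].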

but now, in the new PyTorch, I can’t find how to replace these with torch._C functions. nn.ConvNd does not seem to provide access to backward weights.
I would appreciate any help in this regard.

I have been trying to download torch and import it. I have disabled CUDA in the source code. In __init__.py it tries to run from torch._C import *, and that raises an import error: libcudnn.so.8: cannot open shared object file: No such file or directory.
Is there a way to disable the use of _C, or is it important when using the torch.nn module?
How do I solve this import error?

Could you describe your use case and your changes a bit more, please?
Why and how do you want to “disable” CUDA in the source code?
If you don’t want to use a GPU for PyTorch you could just download the CPU-only binaries.

I am working on a remote Linux machine. I don’t need CUDA or cuDNN, so I disabled the CUDA dependency in the _utils_internal.py file of torch and changed the variable USE_GLOBAL_DEPS = True to False. That worked fine, but it’s giving me an error in the import of _C while initializing the torch library.
Also, can you direct me to the page with the CPU-only binaries?

You can select CPU in the install matrix, copy/paste the command into your terminal, and execute it to install the binaries. Manipulating the binaries might easily break them (unless I misunderstood your comment and you are trying to build from source).
Also, even if you have installed the PyTorch binaries with CUDA support you don’t need to use the CUDA backend and can just use the CPU kernels.
