Where does `torch._C` come from?

I am not aware of any IDE that would be able to do that. I think you will have to have two different debuggers.

Thanks, so two debuggers (one for Python, one for C/C++) for the same PyTorch tutorial in Python. This is interesting.

The same thing happens in numpy if you want to debug C code there.

Thanks, could you give me a simple demo on how to debug C code in a numpy code example?

Well, you usually want to debug in C when you have segmentation faults, as you don’t have any information that was returned from C.

If you want to debug from python, you can use pdb.
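
For example, a minimal way to use it (standard pdb usage, nothing pytorch-specific) is

python -m pdb example.py

or dropping import pdb; pdb.set_trace() into the script at the point where you want to stop.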

Note: in most cases you don’t need to go into gdb to debug a python program, because the libraries were designed to give meaningful error messages and not segfault. In the numpy case that I will show below, I will use an unsafe numpy function to obtain the segfault, but that should not happen with normal numpy code. The same applies for pytorch, so if you observe segmentation faults, please let us know :slight_smile:

If you want to go into debugging C, you can use gdb for example. Say you have a python script example.py that gives a problem. Here I’ll post an example in numpy, called example.py:

import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.array([1, 2, 3])
b = as_strided(a, shape=(2,), strides=(20,))
# accessing invalid memory, will segfault
b[1] = 1

If you run it in Python, it will give a segmentation fault. Try running python example.py.

To run it under gdb, you can do something like

gdb python

and then, once it enters gdb, you do

run example.py

This will run the program and, when it crashes, it will let you inspect where it crashed by running bt (short for backtrace). You can find more information on gdb options online.
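
For example, a typical post-crash inspection (standard gdb commands; how much you see depends on whether debug symbols are available) might look like

(gdb) bt
(gdb) frame 0
(gdb) info locals
(gdb) quit

where frame selects one of the frames listed by bt and info locals prints the local variables of that frame.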

3 Likes

Thanks a lot for this demo, and this info about segfault is good to know!

However, as I repeatedly failed to properly run the gdb_test.py when installing Python from source, I guess this is why I can’t use gdb the way you suggested.

The deeper reason I persisted in installing gdb is that I want to understand the logic of the building blocks of deep learning (such as conv2d, relu, etc.) through the code. pdb helps me a lot in understanding code logic at the Python level (I use pdb to explore the logic of perfectly healthy code, not buggy code, by experimenting and checking the values inside functions and classes), so I expect gdb would do the same for me with C/C++ code, since most of PyTorch’s activations and layers are written in C/C++.

Since I don’t expect gdb to work on my Mac anytime soon, is there another, easier way for me to experiment with PyTorch’s C/C++ code (for example, with ConvForward in THNN) to understand the calculations inside the classes and functions, the way we can use pdb on pure Python code?

Although my gdb is not working on Python code, it does seem to work on C/C++. So if you could give me a very simple demo of how to experiment with ConvForward or Threshold_updateOutput (or anything else) in C/C++, I could start using gdb with C/C++ to explore the Torch code.

Thanks a lot!

If you can’t jump inside gdb, the other alternative I see is to look at the source code and study it.
The core pytorch libraries are implemented inside the lib folder.

A quick summary: all tensor operations are defined in TH (for the CPU) and THC (for the GPU), and the neural network implementations are defined in THNN (CPU) and THCUNN (GPU).
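
A rough sketch of that layout (the exact paths have moved around a bit between versions):

torch/lib/TH       # CPU tensor operations
torch/lib/THC      # CUDA tensor operations
torch/lib/THNN     # CPU neural network kernels (e.g. generic/SpatialConvolutionMM.c)
torch/lib/THCUNN   # CUDA neural network kernels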

Also, the following link might be helpful (even though it’s not 100% up-to-date): https://apaszke.github.io/torch-internals.html .
A fun fact about this document: it consists of the notes Adam Paszke (one of the main devs of PyTorch) took while he was studying the Torch internals! :slight_smile:

2 Likes

Thank you so much for the links and suggestions, they are very helpful! I will study Apaszke’s blogpost more carefully and try out the demo codes this time, as it will make more sense to me now.

Hello,
can you help me find torch._C._nn.binary_cross_entropy?
I still can’t find it. Also, are there any tips for finding things like this?

thank you very much!

Hi,

This function is generated automatically by ATen. Its definition is here, and it calls directly into the C backend, which is implemented in this file for the CPU and this one for the GPU.
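
If you want to locate this kind of thing yourself in a source checkout, one option (just a sketch; the paths can differ between versions) is to search the ATen declarations and the native sources, e.g.

grep -n "binary_cross_entropy" aten/src/ATen/native/native_functions.yaml
grep -rn "binary_cross_entropy" aten/src/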

2 Likes

Thank you very much.

I found that the names of functions vary between PyTorch and C, which means that with “Ctrl + F” I can’t find what I’m looking for in the C files.
So, is there some relationship between the different names of the same function in PyTorch and in C? I.e. how do you find it, step by step? :sweat_smile::sweat_smile:

Besides, can I assume that the source code of all functions in PyTorch can be found? :relaxed:

In the nn C code, forward corresponds to updateOutput, backward wrt the input is updateGradInput, and backward wrt the parameters (if there are any) is accGradParameters. These names are legacy from the old torch7 naming conventions.
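
For a concrete example of the convention (the exact set of functions depends on the module and the version), the 2d convolution in THNN exposes:

SpatialConvolutionMM_updateOutput       (forward)
SpatialConvolutionMM_updateGradInput    (backward wrt the input)
SpatialConvolutionMM_accGradParameters  (backward wrt weight and bias)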

I am sorry, I don’t understand this question: “can I assume that the source code of all functions in PyTorch can be found?” :slight_smile:

1 Like

I am sorry. Can I find the original implementation of all of the functions? E.g. you helped me find torch._C._nn.binary_cross_entropy here, which I once thought might not be open to us.

Hi,

Yes, all the functions are in the repo :slight_smile:
Some are implemented in pure Python; the others are available behind the ATen wrapper, in a similar way to binary_cross_entropy.
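
For instance, one way to tell the two kinds apart from Python (a small sketch using the standard inspect module) is:

import inspect
import torch
import torch.nn.functional as F

# The Python-level wrapper is ordinary Python, so its source can be printed:
print(inspect.getsource(F.binary_cross_entropy))

# The compiled implementation exposed through torch._C._nn is a C extension,
# so inspect.getsource would raise a TypeError for it:
# inspect.getsource(torch._C._nn.binary_cross_entropy)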

2 Likes

OK, thank you very much

can anyone help me to find this C function?
torch._C._nn.log_softmax(input, dim)
I need to see the function body

Hi,

This function’s definition is here and its implementation is here.
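
If the goal is just to see what the function computes, here is a small sketch (pure Python, independent of the C implementation): log_softmax(x, dim) is simply x - logsumexp(x, dim).

import torch
import torch.nn.functional as F

x = torch.randn(2, 5)

# log_softmax subtracts the log of the sum of exponentials along `dim`
manual = x - x.logsumexp(dim=1, keepdim=True)

print(torch.allclose(F.log_softmax(x, dim=1), manual))  # True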

There is a bug after pytorch 0.4, as discussed here. When I call Python from MATLAB, I get:

Traceback (most recent call last):
  File "", line 1, in <module>
  File "/home/cheng/Documents/tracking/HRL/track/track_single.py", line 8, in <module>
    import torch
  File "/home/cheng/anaconda2/envs/py36th0.4.1/lib/python3.6/site-packages/torch/__init__.py", line 80, in <module>
    from torch._C import *
RuntimeError: stoi

And I think that if I can find where the stoi function is called during the import process, it will help solve the problem. I searched for stoi in the pytorch GitHub repo and found 14 results. Can anybody take a look at which of them may be called during import? Search results for stoi
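
Following the gdb approach from earlier in this thread, one way to narrow this down (a sketch, untested on this exact setup) is to break on the C++ exception as it is thrown and look at the backtrace:

gdb --args python -c "import torch"
(gdb) catch throw
(gdb) run
(gdb) bt

The backtrace should then show which C++ call is handing a bad string to std::stoi.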

Hi smth, can you tell me where torch._C.TensorBase comes from?

I am trying to find out how to replace updateGradInput from the thnn backend now that it is apparently gone.
I am trying to get access to the backward weights. Former versions of pytorch were happy with:
_backend = type2backend[input.type()]
_backend.SpatialConvolutionMM_updateGradInput(
    _backend.library_state,
    input,
    grad_output,
    grad_input,
    weight_feedback,
    finput,
    fgradInput,
    ksize[0], ksize[1],
    int(stride[0]), int(stride[1]),
    int(padding[0]), int(padding[1]),
)
but in the new PyTorch I can’t find how to replace this with torch._C functions. nn.ConvNd does not seem to provide access to the backward weights.
I would appreciate any help in this regard.
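
In case it helps, here is a minimal sketch of one possible replacement, assuming torch.nn.grad.conv2d_input is available in your version: it computes the gradient wrt the input for a given weight and grad_output, so the feedback weights can be passed in place of the layer’s own weights (the shapes and names below, like weight_feedback, are just placeholders).

import torch
from torch.nn import grad as nn_grad

# Hypothetical shapes for a 3x3 convolution with 3 input and 4 output channels,
# stride 1 and padding 1.
input = torch.randn(1, 3, 8, 8)
weight_feedback = torch.randn(4, 3, 3, 3)   # the "feedback" weights
grad_output = torch.randn(1, 4, 8, 8)       # gradient coming from the layer above

# Gradient wrt the input, computed with the feedback weights instead of the
# layer's own weights (roughly what SpatialConvolutionMM_updateGradInput did).
grad_input = nn_grad.conv2d_input(
    input.shape, weight_feedback, grad_output,
    stride=(1, 1), padding=(1, 1),
)
print(grad_input.shape)  # torch.Size([1, 3, 8, 8])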